One of the entries in Tom Whitwell's '52 Things I Learned' (another excellent list this year, as always) related to how the push button was a somewhat controversial interface when it was first introduced in the 1880s. Digging a bit further, there are some interesting parallels with current concerns about contemporary technologies and interfaces.
Prior to the new invention, buttons were connected to mechanical levers or spring mechanisms (as in a typewriter). Following the introduction of the electric button in the 1880s (essentially an on/off switch for electric circuits), the technology began to emerge in multiple forms in many different places.
According to a study by Rachel Plotnick, the introduction of the electric push button caused a degree of consternation, with some concern over whether the new interface would act to the detriment of human understanding of how things worked (the 'black box' idea of the inner workings being invisible and inaccessible). As this piece notes, Plotnick describes how different groups attempted to manage their fears about electricity: some saw the button as a way for users to 'avoid complicated and laborious technological experiences', while others believed that users should 'creatively interrogate these objects and learn how they worked as part of a broader electrical education'.
Some people did gain a working knowledge of how electricity (and electric push buttons) worked, but the promoters of electric devices believed that using such interfaces should be 'simplistic and worry-free' (as in the Eastman Kodak slogan for their cameras: “You press the button, we do the rest”). This 'electricity-as-magic', black-box approach seemingly won out, with users simply not needing to understand the inner workings of what happens when they press an electric button.
Fast-forward a century and we're facing the same dilemma with AI. How much (if anything) do users really need to understand about what's happening behind an AI-driven interface? Given the example of electricity, it perhaps seems inevitable that the answer is 'not much'. But I do wonder whether, given the potential power AI has to revolutionise so many things, there should be more effort to give users at least a basic working knowledge of how AI is making decisions on their behalf.