What Will Programmers Do After the Artificial Intelligence Revolution?

Would you rather be a Deep Network Surgeon or an A.I. Introspection Engineer?


Artificial intelligence and automation are a source of palpable anxiety in our culture. Exciting technological advances are mingled with op-eds, statistics and predictions about negative implications for future employment: what will work and career opportunities look like in a future where computers learn, repair themselves and even come up with new solutions?

Futurist Thomas Frey gave a TED talk in which he predicted 2 billion jobs would disappear by 2030--but don’t be alarmed! He believes new jobs will be created in tandem with these losses, resulting in a net balance. Driverless car operating system engineers will replace taxi and limo drivers. Construction industry jobs will shift to 3D printer repair technicians. Rather than a lack of jobs, Frey suggests that “our challenge will be to upgrade our workforce to match the labor demand of the coming era.”


Riccardo Campa, a sociologist at the University of Cracow, reviews how the fears and realities of technological unemployment have evolved from 350 B.C.E. to the present in his essay “Technological Growth and Unemployment: A Global Scenario Analysis.” Like many, Campa sees parallels between the Industrial Revolution and today’s Artificial Intelligence Revolution. During the 18th and 19th centuries, technology caused short-term mass unemployment among low-skilled workers. Many people feared this would lead to irreversible poverty and social collapse. But the most-feared outcomes never materialized: in the long run, the new technologies created more new jobs than they destroyed.

This leads many to expect that this will always be the case: technology changes jobs, it doesn’t destroy them. “The theory that technological change may produce structural unemployment has been repeatedly rejected by neoclassical economists as nonsense and labeled ‘the Luddite fallacy,’” Campa explains. “These scholars contend that workers may be expelled from a company or a sector, but sooner or later they will be hired by other companies or reabsorbed by a different economic sector.” The interesting question then becomes: exactly how will jobs change?

How programming changed psychology

In a recent article for Wired, Jason Tanz examines how technological change has reshaped a relatively young profession: computer programming. His article, "Soon We Won’t Program Computers. We’ll Train Them Like Dogs," traces the coevolution of computer science and the study of the human mind since the 1950s.

Before the invention of the programmable computer, most experimental psychologists studied associations between stimuli and responses. Called “behaviorists,” they viewed the mind as an unobservable “black box.” The most famous behaviorist was Pavlov, who studied the relationship between the ringing of a bell and the response of his dog.

With the advent of computer programming, however, experimental psychologists began to think of the human mind as a computer program. Experimental psychology became a task of figuring out the steps of our “mental program.” This shift, referred to by those in the field as the Cognitive Revolution, happened in 1956 (according to George Miller, one of the founders of the movement), and was the driving force behind both the study of the mind and artificial intelligence for the next 30 years.

And how neuroscience changed programming

As the parallel between minds and computer programs was more fully explored, programmers realized that they could get insight from research on the brain as well. Instead of writing programs by hand-crafting a set of step-by-step rules to follow, they would begin with a largely random set of simple units (similar to neurons) and allow their parameters to change over time in order to “learn” how to solve problems. These were the first “artificial neural networks,” and by the early 1980s they were all the rage in artificial intelligence research. Both psychologists and computer scientists were excited about the promise of mathematical “learning algorithms” that would let computer programs effectively create their own solutions to problems.
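
To make the idea concrete, here is a minimal sketch of that kind of learning in Python. The task (the XOR function, which no single linear rule can solve), the layer sizes, and the learning rate are arbitrary choices for illustration, not anything from Tanz’s article:

```python
import numpy as np

# Start with a random set of simple units and repeatedly nudge their
# parameters to shrink the error, rather than hand-coding the rules.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output

for step in range(20000):
    # Forward pass: the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: adjust every parameter a little to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]] as training proceeds
```

No one tells the network what XOR is; the random weights simply drift toward a configuration that produces the right outputs.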

In the decades since, this approach to programming has expanded well beyond the field of artificial intelligence. The current buzzword is “deep learning,” which is nothing more than a particularly complex style of artificial neural network that uses advanced learning algorithms to solve big data problems. It is being used by Google and other big research organizations to mine social network data, create complex three-dimensional models from photographic data, and even diagnose medical problems.

Tanz points out that this history has almost brought us full circle: deep learning networks are very effective at discovering solutions to complex problems, but they are almost completely opaque. Much like the early behaviorists, programmers using state-of-the-art deep learning algorithms often have no idea how or why they work. They train the systems on input patterns, and the complex mathematical routines adjust themselves to produce a solution.

Andy Rubin, one of the co-creators of the Android operating system, is quoted in Tanz’s article: “People don’t linearly write the programs. After a neural network learns how to do speech recognition, a programmer can’t go in and look at it and see how that happened. It’s just like your brain. You can’t cut your head off and see what you’re thinking.” The pendulum has swung back to the “black box,” with programmers acting like behaviorists: they link inputs to outputs, training programs like Pavlov’s dog.

That’s the story up to the present day. But if we take the swinging pendulum idea seriously, we should be able to go even further. We can look to history to predict what will happen next.

The next steps in programming evolution

Today’s headlines tell us the Artificial Intelligence Revolution is changing what it means to be a computer programmer. Even the general term “programmer” has fallen out of fashion, since employers and resume-writers prefer more specific terms: mobile app developer, for example, or enterprise solution engineer.

What will the next phase look like? We can look to the history of the Cognitive Revolution to find out.

Learnability Analysis. Noam Chomsky often receives credit for putting a major nail in the coffin of behaviorism with his research on language-learning in infants. He argued that it would be impossible for infants to learn grammar based solely on the feedback they get from the outside world: parents simply don’t correct young children’s grammar consistently. Infants learn how to use language, Chomsky proposed, by having customized, language-specific “programming” in the brain that tells them how language should work.

One of the new specializations within engineering over the next 20 years will be the Learnability Analyst: the engineer who is trained specifically to examine a task and determine what aspects of that task can be learned by a deep-learning algorithm, and what aspects must be customized ahead of time. This is a complicated endeavor, because any large-scale information processing problem--whether it’s examining customer buyer behavior across the country, or searching for subtle clues about potential terrorist attacks--is likely to require both some level of customized, hand-crafted knowledge and some amount of general learning. It will be a full-time job, and even a career, to discover exactly where those boundaries lie and how much of each to include in any given machine learning system.
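
As a toy illustration of where such a boundary might lie (the task, the models, and the numbers here are invented, using the open-source scikit-learn library): a generic learner fails to pick up the parity of 16 random bits from examples alone, but succeeds immediately once a hand-crafted running-parity feature is built in.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Task: predict the parity of 16 bits. Pure learning struggles here;
# a little built-in knowledge makes it trivial. A learnability analyst's
# job would be finding exactly such boundaries in real problems.

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2000, 16))
y = X.sum(axis=1) % 2  # the parity label

raw = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
raw.fit(X[:1500], y[:1500])

# Hand-crafted "built-in machinery": the running parity of the bits.
X_feat = np.cumsum(X, axis=1) % 2
crafted = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
crafted.fit(X_feat[:1500], y[:1500])

print("learned from raw bits:", raw.score(X[1500:], y[1500:]))          # near chance
print("with built-in feature:", crafted.score(X_feat[1500:], y[1500:]))  # near 1.0
```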

Cross-Cultural Machine Learning. Another line of research launched by Chomsky was Cross-Cultural Cognitive Linguistics. Cross-cultural research in language existed prior to Chomsky, of course; but Chomsky’s ideas about the human mind having “built-in machinery” for language prompted cross-cultural studies about what different languages can tell us about the structure of the mind. An important field of research in the future will be a comparative study of deep learning systems trained using radically different assumptions or inputs. This will become increasingly important as governments and companies all over the world create proprietary deep learning systems that solve similar problems in radically different ways.

A “cross-cultural” machine learning analyst would compare different systems to discover which characteristics depend on the quirks of the people who built each system, and which characteristics are truly universal for solving the problem at hand.
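
A rough sketch of what such a comparison could look like (everything here is hypothetical: two off-the-shelf scikit-learn models stand in for two “cultures” of system design, trained on invented data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Two systems built on very different assumptions solve the same problem.
# Agreement hints at something universal about the task; disagreement
# points to the quirks of each approach -- the analyst's raw material.

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # a stand-in prediction task

culture_a = RandomForestClassifier(random_state=0).fit(X[:800], y[:800])
culture_b = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(X[:800], y[:800])

pred_a = culture_a.predict(X[800:])
pred_b = culture_b.predict(X[800:])
agree = pred_a == pred_b
print("agreement rate:", agree.mean())
print("disputed cases:", np.flatnonzero(~agree)[:10])  # where to dig in
```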

A.I. Introspection Software. Herbert Simon developed a “thinking out loud” method to study the steps people use to solve complex problems like playing chess. By training people in a standardized way to report on their own thought-processes, Simon provided a method psychologists could use to get insight into what “program” the human mind was running while it operated.

We already see people frustrated with the “black box” of millions of “connection parameters” that large-scale deep learning machines produce. One solution is to create software that we can train to report on the internal processes of other deep learning software--just as Herbert Simon taught human experimental subjects to “think out loud” while playing chess.

This is no small task, and it will require very specialized mathematical and programming skills. Within 20 years, our data mining and deep learning procedures may be so good that we can predict the probability of a car accident happening in a particular location up to an hour in advance… but that won’t be the impressive or complicated part. The complicated task will be developing the software that the car accident prediction software will use to explain to its human owners exactly how it makes its predictions.
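
One plausible form such explanation software could take, sketched here with invented data and feature names (the “black box” is a small scikit-learn network, and the introspection layer is a shallow decision tree trained to mimic it, a technique usually called surrogate modeling):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A black-box predictor is trained first; then a second, transparent
# model learns to imitate the black box's answers. Reading the
# imitator's rules is one way to "report" on what the box is doing.

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))                 # hypothetical risk factors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical accident label

black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                          random_state=0).fit(X, y)

# The introspection software learns from the black box's predictions,
# not from the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=["traffic", "rain", "hour"]))
```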

Network Semantics Interpretation. In neuroscience, Horace Barlow discovered there are neurons in the visual cortex that respond to specific shapes, patterns, and types of movement. He used this to assign meaning to the function of different neurons, calling them “feature detectors.” These methods evolved over the decades into sophisticated techniques for studying the responses of populations of neurons and doing a kind of “semantic analysis”: based on that raw physical data, what meaning can we attach to this neuron? Is it a “face detector”? Is it an “emotion detector” or a “decision cell” or even a “lie detector”?

In the next 20 years, one field of specialization will be cracking open deep learning networks and doing a semantic analysis of the millions of parameters that exist within. Currently, neuroscientists selectively stimulate different areas of the brain, or examine patients with damage to different areas, to interpret the meaning or function of those areas. The future’s Network Semantics Interpreter will use similar techniques, but will apply them to enormous artificial neural networks housed in the servers of research institutes and mega-corporations.
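
A toy version of such a “lesion study” on an artificial network (the network below is a stand-in with hand-set weights; a real interpreter would load a trained model’s parameters instead):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny hand-built network: unit 0 mostly reads input 0, unit 1 input 1.
W1 = np.array([[3.0, 0.0],
               [0.0, 3.0]])     # input -> hidden
W2 = np.array([[2.0], [-2.0]])  # hidden -> output

def forward(X, silence=None):
    h = sigmoid(X @ W1)
    if silence is not None:
        h[:, silence] = 0.0  # the "lesion": knock out one hidden unit
    return sigmoid(h @ W2)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
base = forward(X)
for unit in range(2):
    shift = np.abs(forward(X, silence=unit) - base)
    print(f"unit {unit}: output shift per input = {shift.ravel().round(3)}")
# A unit whose removal mainly disturbs one type of input is, in Barlow's
# terms, a candidate "feature detector" for that input.
```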

As our artificial computational systems become more like neural systems, the way we deal with our programs will become even more heavily influenced by neuroscience and psychology. We will need Deep Network Surgeons who can go in and remove damaging “tumors” that are causing errors in our largest learning networks. We will need “A.I. Cognitive Behavioral Therapists” who can un-train a network that has acquired responses we don’t like.
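
A loose sketch of what “un-training” could mean in practice, with invented data (scikit-learn’s warm_start option lets repeated training sessions start from the network’s existing weights, like therapy sessions that gradually extinguish a habit):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 4))
y_bad = (X[:, 0] > 0).astype(int)   # the unwanted response the net acquired
y_good = (X[:, 1] > 0).astype(int)  # the behavior we actually want

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0, warm_start=True)
net.fit(X, y_bad)  # the network learns the bad habit first

for session in range(10):
    net.fit(X, y_good)  # warm_start=True: keep retraining the same weights

print("agreement with desired behavior:", net.score(X, y_good))
```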

Eventually, of course, even those jobs will become obsolete: there is no reason to believe, after all, that we can’t also train a robotic or artificial system to perform learnability analysis, or cross-cultural A.I. research, or any of the other tasks described here. Riccardo Campa and many of his futurist peers believe we must prepare for a future where work has been phased out completely. In the end, they are almost certainly correct--although the time frame for that happening is anyone’s guess.

In the meantime, today's coders can look forward to productive careers in A.I. Introspection Training, Deep Network Surgery, and whatever else the next wave of innovation brings.