Defining AI

The phrase ‘artificial intelligence’ is used in so many contexts these days (some of them a little disingenuously) that it is worth beginning with a definition. The Merriam-Webster dictionary covers it concisely:

1. A branch of computer science dealing with the simulation of intelligent behavior in computers; 

2. The capability of a machine to imitate intelligent human behavior.

In practice, this means computer scientists program computers -- including robots and the operating systems of other devices -- with algorithms that try to learn and mimic the patterns of human thought (termed ‘strong AI’ when it works). Or they use algorithms to process huge amounts of data through brute force, drawing conclusions that would be costly or impossible for human minds to compute (‘weak AI’). We see a whole lot of the latter going on around us today, while the former is still emerging from the realm of science fiction.

Robot brains on campus

In the classroom, you are currently more likely to experience weak AI. But weak AI’s superpower is to take repetitive work out of teachers’ hands and deal with it automatically. For example, machines can now read handwriting and compare answers against an answer sheet, meaning that teachers of some subjects, such as mathematics, can concentrate on tutoring while a robot gives the grades. AI can also churn through student data to figure out personalized learning plans, or spot worrying (or promising) trends in a student’s performance that may be unnoticeable to the human eye.
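
At its simplest, the kind of automated grading described above is just a comparison against an answer key. Here is a minimal sketch in Python; the function, questions, and answers are all invented for illustration:

```python
# Minimal sketch of automated grading: compare a student's answers
# against an answer key. All names and data here are hypothetical.

def grade(answers, answer_key):
    """Return the number of correct responses and a percentage score."""
    correct = sum(1 for question, given in answers.items()
                  if answer_key.get(question) == given)
    return correct, round(100 * correct / len(answer_key))

answer_key = {"q1": "42", "q2": "x + 2", "q3": "7"}
student = {"q1": "42", "q2": "x + 2", "q3": "9"}

score, percent = grade(student, answer_key)
print(f"{score}/{len(answer_key)} correct ({percent}%)")
```

Real systems add handwriting recognition and partial-credit rules on top, but the core loop -- match, count, score -- is this simple, which is exactly why it is such an easy job to hand to a machine.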

AI can’t yet tell a good essay from one with shaky reasoning, nor are its language skills quite as advanced as general use of Google Translate might suggest. But artificial intelligences are now starting to teach online courses. In fact, calling them ‘automated’ intelligences might be closer to the truth. So far, these online teachers mostly rely on complex decision paths to guide students along a tailored route through an essentially rigid structure. In other words, robots are already replacing, but not surpassing, human teachers. 
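
A ‘decision path’ of this kind can be sketched as a fixed tree that the student’s answers walk through. The lesson content and node names below are invented; the point is how rigid the structure really is:

```python
# Toy decision path of the kind rule-based tutoring systems use:
# the student's answer selects the next node in a fixed tree.
# Questions and node names are invented for illustration.

decision_path = {
    "start": {"question": "Is 3/6 equal to 1/2? (yes/no)",
              "yes": "advance", "no": "remediate"},
    "remediate": {"question": "Divide top and bottom by 3. Equal now? (yes/no)",
                  "yes": "advance", "no": "remediate"},
}

def next_step(node, answer):
    """Follow the tree: repeat remediation until 'yes', then advance."""
    return decision_path[node][answer]

node = "start"
node = next_step(node, "no")   # wrong answer -> remediation branch
node = next_step(node, "yes")  # correct after review -> move on
print(node)
```

Every possible route through the lesson was written down by a human in advance -- tailored, yes, but not remotely creative.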

Despite the debut of Yuki, the world’s first robot lecturer, in Germany last year, teaching is still considered a “creative, insightful, collaborative, soul-enriching” human activity. It is unclear whether students will learn as deeply from an electronic professor as they can from a flesh-and-blood teacher -- though it may depend on the personal charisma of the latter! 

We will certainly begin to see more and more automation coming into higher education. Chatbots will respond to queries to take the burden off professors’ inboxes. Intelligent tutoring will assist with homework. And learning environments will increasingly resemble the world of Minority Report (or, perhaps, Red Dwarf).

How to become an AI master

Regardless of whether you will be taught by human or machine, what is the best major for a career in artificial intelligence? A growing number of universities provide degrees in artificial intelligence. Many others offer broader computer science degrees with the option to weight your program towards AI. And others deliver courses with intriguing AI-themed names, such as Indiana University’s MS in Human-Computer Interaction Design.

The world will also get its first University of Artificial Intelligence this September, in Masdar City – a smart ‘new town’ under development in the United Arab Emirates. “AI is already changing the world,” declares Dr. Sultan Ahmed Al Jaber, the UAE’s Minister of State. “But we can achieve so much more if we allow the limitless imagination of the human mind to fully explore it. The invention of electricity, the railroad, smartphones all transformed the world as we knew it. AI can lead to an ever greater societal and economic transformation, but first we must ensure we have the right infrastructure, talents, and academic institutions.”

However, a degree in artificial intelligence or computer science is not the only route into AI. If you have a particular strength or passion, it might be just as beneficial to major in that, and develop your AI-specific skills through electives, work experience, and on-the-job training. 

A science major such as mathematics, statistics, data science, or cognitive science will provide valuable knowledge and understanding for an AI career. Other sciences such as biology, physics, and neuroscience each inform and inspire the solutions that AI engineers find. But don’t rule out a liberal arts degree if that’s where you will apply yourself most enthusiastically. Arts students develop skills that are valuable to the AI field, including language/linguistics, communication, reasoning, and emotional intelligence.

Applying AI beyond college

Artificial intelligence is one of those areas where you will always be able to find or initiate work around a theme you find fascinating. The technology is manifesting in every walk of life, to the extent that “we can build an AI to fix this” has become a cliché. It is essential for the new class of AI experts to prioritize the human factor, particularly the impact of ubiquitous AI on the most vulnerable and underrepresented among us.

The responsibilities that we are preparing to hand over to artificial intelligence vary widely, on a scale from ‘mundane’ to ‘Jetsons’. Self-driving cars are one of the most pressing and headline-grabbing developments. Early incarnations with limited autonomy may make their commercial debut on our roads before the year is out. 

But the ethical, legal, and logical barriers holding back ‘true’ self-driving cars are as significant as the technological hurdles the motor industry faces. In fact, the self-driving car is the perfect case study for many of the riddles we will need to solve if we are to integrate artificial intelligence into our daily lives without unforeseen adverse effects.

For example, imagine your car is driving towards a school crossing when an out-of-control truck veers towards you. Your car must decide whether to save you by accelerating through the crossing or to save the kids on the crossing by allowing you to perish under the wheels of the truck. How should your car’s 'brain' be programmed to make calls like this?
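
One way to see how hard that question is: any programmed answer ends up as a set of weights that someone had to choose. The sketch below is purely illustrative -- no real vehicle works this way, and every number is an invented ‘harm weight’ -- but it shows where the ethics hide:

```python
# Hedged sketch: the crossing dilemma as an explicit choice between
# outcomes. All harm values are invented; the point is that someone
# must pick these numbers before the car ever moves.

OUTCOMES = {
    "accelerate_through_crossing": {"occupant_harm": 0.1, "pedestrian_harm": 0.9},
    "brake_and_hold_position":     {"occupant_harm": 0.7, "pedestrian_harm": 0.0},
}

def choose(outcomes, occupant_weight, pedestrian_weight):
    """Pick the action with the lowest weighted expected harm."""
    def cost(harms):
        return (occupant_weight * harms["occupant_harm"]
                + pedestrian_weight * harms["pedestrian_harm"])
    return min(outcomes, key=lambda action: cost(outcomes[action]))

# Equal weights protect the pedestrians; weighting the occupant
# heavily flips the decision. The ethics live entirely in the weights.
print(choose(OUTCOMES, occupant_weight=1.0, pedestrian_weight=1.0))
print(choose(OUTCOMES, occupant_weight=10.0, pedestrian_weight=1.0))
```

Whoever sets `occupant_weight` has answered the moral question -- in code, before the dilemma ever arises on the road.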

There are tech issues, too. The first self-driving car fatality occurred in 2016. It was a moment when the dreams of science-fiction writers crystalized in the most mundane yet tragic form. And it occurred due to a 'misperception' by the autopilot: its sensors failed to distinguish the side of a white truck from the bright white sky beyond.  

Intelligent systems embody our own perceptual shortcomings. These may manifest as a failure to imagine a potential crash scenario, or as administrative systems that absorb conscious or unconscious human biases -- as in predictive policing, which disproportionately targets Black people.

“[A]lgorithms are opinions embedded in code,” as Harvard mathematician Cathy O’Neil puts it.
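
A deliberately oversimplified sketch makes O’Neil’s point concrete. The ‘prediction’ below just ranks districts by historical arrest counts, so past over-policing of an area guarantees more policing of it in future. The data and names are invented:

```python
# How bias enters a model: this 'predictor' only echoes the past.
# All data is invented for illustration.

historical_arrests = {
    "district_a": 120,  # heavily patrolled in the past
    "district_b": 15,   # lightly patrolled in the past
}

def predict_patrol_priority(arrest_counts):
    """Rank districts by past arrests -- the 'opinion' baked into the code."""
    return sorted(arrest_counts, key=arrest_counts.get, reverse=True)

print(predict_patrol_priority(historical_arrests))
# More patrols produce more recorded arrests, which raise that
# district's score next time: a feedback loop, not a neutral measurement.
```

Real predictive-policing systems are far more elaborate, but the feedback loop in the comments is the documented core of the criticism.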

An entity that cannot be defeated

But AI is also capable of doing super-impressive stuff. The world’s greatest player of Go, the ancient Chinese board game, is retiring from competition -- having been defeated by an opponent named AlphaGo. “Even if I become the No. 1,” said Lee Se-dol, “there is an entity that cannot be defeated.” Human beings have worked at becoming Go experts for over 2,500 years, and players have long believed intuition is as valuable to the game as analysis. The robot mind of AlphaGo suggests otherwise.

In Accra, the capital of Ghana, Google has opened an AI research center, empowering local tech experts to figure out solutions to local problems in agriculture, education, and health. One example is the ability to analyze plant disease using a smartphone app. The work is open source, so it benefits from the input of its users -- farmers literally out ‘in the field.’ The researchers are also working proactively to counter the algorithmic biases that blight some AI developments.

In the United States, military development of AI applications and weaponry has prompted a review of the ethical, moral, and legal framework within which smart/automated systems should operate. The Defense Innovation Board (DIB) has developed an Isaac Asimov-style set of principles for AI. In his short story Runaround (1942), Asimov famously introduced his ‘Three Laws’: a robot may not injure a human (or, through inaction, allow one to come to harm), must obey human orders, and must protect its own existence. DIB’s five rules apply more to the developers: military AI should be responsible, equitable, traceable, reliable, and governable. Of course, who gets to pick the laws and decide what counts as responsible, equitable, and so on remains a human affair – and humans can always change the rules to suit themselves.

It is clear artificial intelligence is a matter of enormous potential and enormous responsibility for those with the ambition to build our brave new world. From shaping policy to building robots, from solving impossible problems to devising new modes of thought, professionals in artificial intelligence may be the guardians of our future. Need more input? Go study!