I Debated the CEO of one of China's Biggest A.I. Companies about the Future of Translation — and the Outcome Surprised Me!
March 27, 2018
Jonathan Rechtman
Last week I had a truly amazing opportunity — to debate the future of AI and translation at the 2017 iAsk Debate in Beijing.
Going into the debate I was roughly equal parts excited and terrified.
Excited because I am passionate about the future of translation and interpreting (my company, Cadence, and five other startups recently launched the Interpreting Technologies Alliance to explore just this topic), and because I think a lot about AI generally and have some (imo) pretty thought-provoking things to say about the role of humans and machines in our society moving forward.
Terrified because I'm not, well, actually an AI expert, whereas my opponents in the debate were literally pioneers in the field, each with decades of experience in AI technology and product development. Oh, and because the debate was being live-streamed and recorded for television broadcast. Entirely in Chinese.
So what was I doing in the hot seat?
To put it bluntly, I was the foil: the human translator, the highly skilled non-machine. I wasn't just representing myself, or my company, or my profession — I was meant to represent all knowledge workers — the lawyers and auditors and journalists and engineers — whose jobs may hang in the balance as artificial intelligence closes in on our own abilities. I was the face of the new AI economic refugee.
My opponents were the brains behind the machines that might replace me:
Zhang Qi, a partner engineering director for A.I. at Microsoft, has 20 years of AI/ML experience, 15 publications, and 12 patents under his belt. He leads a large team at Microsoft dedicated to the commercialization of AI research.
Hu Yu is the CEO of iFlytek, one of China's best-established AI companies in NLP and voice recognition and a very active player in machine translation and speech-to-text (Hu himself holds a PhD in signal processing). iFlytek's broad range of AI-powered voice products has begun to garner attention in Western media recently, but in China they're perhaps most visible at international conferences, where they offer “instant subtitling” for speeches in English and Chinese, an application that has many people speculating about whether the company is gunning to eventually replace human simultaneous interpreters. (Though a recent run-in between “instant subtitling” and celebrity hedge fund manager Ray Dalio produced absolute hilarity and left me confident that humans are still the best game in town.)
Together with my debate partner — the equally impressive Qiao Huijun, a partner at Hongtai Capital and the founder of an AI incubator called A-plus Labs — I was tasked with arguing the “threat” side of the AI debate: that artificial intelligence will have a net negative impact on humans.
Mid-debate with Mr. Hu Yu, CEO of iFlytek
Our team started out at a considerable disadvantage, with pre-debate polling showing that 70% of the live audience believed AI would serve humanity rather than threaten it. The burden of changing minds fell to us, and we set about it with aplomb.
I used my opening statement to lay out a fairly classical argument centered around economic dislocation and the inability of our social and political institutions to adapt in the face of exponentially accelerating technological change.
I also described how developments in simultaneous interpreting might illustrate the decline in value-add from human labor relative to our tools. The thinking goes something like this:
Simultaneous interpreting is already a collaboration between humans and machines. Our brains do most of the heavy cognitive lifting, but we wouldn't be able to deliver the product without headphones and microphones and transmitters. The key is that humans create the real value-add, and the machines function as a supporting tool.
As these tools get more sophisticated, they'll be able to take on some of the cognitive load as well. The targeted use of instant speech-to-text and translation memory will help human interpreters to work more accurately and for longer periods of time. In this sense, interpreters — and really all knowledge workers — will experience huge gains in productivity powered by machine intelligence over the next decade or so.
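To make that concrete, here is a toy sketch, in Python, of the kind of "assist" I have in mind. It is purely illustrative (nothing here reflects iFlytek's or anyone else's actual product, and names like GLOSSARY and assist_interpreter are made up for this post): software watches the speech-to-text stream and surfaces the things machines are reliably good at, such as long numbers and known terminology, while the human keeps doing the interpreting.

```python
# Hypothetical sketch of an "interpreter assist" step, not any real product's API.
# Given one speech-to-text segment, flag long numerical strings and known
# glossary terms so the human interpreter can see them at a glance.

import re

# A minimal stand-in for a translation memory / term base: source term -> preferred rendering.
GLOSSARY = {
    "simultaneous interpreting": "同声传译",
    "hedge fund": "对冲基金",
}

# Long numerical strings, e.g. "1,234,567.89" -- the kind humans fumble under time pressure.
NUMBER_PATTERN = re.compile(r"\d[\d,\.]{3,}")


def assist_interpreter(asr_segment: str) -> dict:
    """Return cues a human interpreter might want for this transcript segment."""
    numbers = NUMBER_PATTERN.findall(asr_segment)
    terms = {src: tgt for src, tgt in GLOSSARY.items() if src in asr_segment.lower()}
    return {"numbers": numbers, "term_suggestions": terms}


if __name__ == "__main__":
    segment = "Our hedge fund returned 1,234,567.89 dollars on simultaneous interpreting tools."
    print(assist_interpreter(segment))
    # {'numbers': ['1,234,567.89'],
    #  'term_suggestions': {'simultaneous interpreting': '同声传译', 'hedge fund': '对冲基金'}}
```

The point of the sketch is the division of labor: the machine handles the pieces it is reliably good at, and the human stays in the loop for everything else.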
As the machines move up the value chain (very likely at an exponential rate), the question is whether those productivity gains will be fairly distributed among human workers, technology owners, solutions bundlers, and consumers.
I argued that they won't be, that virtually all human labor will wind up being commoditized (yes, even creative labor) — and that the bulk of all economic value-add will be captured by an increasingly elite minority of corporate technology owners (think the bad guy in Blade Runner), and — should an artificial intelligence ever become self-aware enough to consider controlling its own resources — by the technology itself (think The Matrix).
I like to say that in the future the only two human jobs on the planet will be public administration (setting top-level goals for the machines to optimize around, and possibly rules to circumscribe them) and private equity (distributing resources among various vast A.I. systems). Those two jobs might employ a few thousand (maybe up to a hundred thousand) humans worldwide; but if you want to do meaningful economic work in the second half of this century and you don't make the cut in P.A. or P.E., you'd better go colonize Mars.
The idea that the rest of us can all be poets, teachers, and yoga instructors strikes me as dangerously predicated on the assumption that productivity gains will be distributed such that we all live comfortable middle-class lifestyles without having to work. Everything in the social politics of our day is trending in the other direction: a widening of the wealth gap, a weakening of civic institutions, and an increasing disregard for the privacy, welfare, and dignity of the economically disempowered.
This is all, of course, just the economic side of A.I. threat theory — you can read the sci-fi canon for more gleeful visions of super-intelligent systems manipulating, enslaving, or exterminating the human race.
My partner Qiao Huijun took a different tack, suggesting that humans might not be replaced by machines so much as co-evolve with them, eventually yielding a cybernetic new species of machine-augmented super-humans. This evolution might be great for those who evolve — who knows, maybe this new species will avert climate change and end poverty, war, and suffering — but evolution generally involves the diminishment and/or annihilation of the incumbent. Just ask a Neanderthal what he thinks of Darwin.
Our opponents naturally took a moderate position: Zhang Qi described some of the more miraculous potential benefits of applied AI — in helping to cure Parkinson's disease, for example. Hu Yu shared research identifying which professions would be most impacted by the rise of artificial intelligence — primarily those involving repetitive cognitive tasks — but suggested that humans would retain an advantage (and employability) in the creative fields.
Interestingly, he identified translation and interpreting as sitting precisely at the crossroads of “cognitive” and “creative”. There are aspects of simultaneous interpreting that machines are probably much better suited to than humans (accurately interpreting long numerical strings, for example), and other aspects in which humans will maintain a strong and possibly insurmountable advantage (such as interpreting humor and cultural references).
In the end, we agreed that the introduction of AI tools to assist human interpreters will lead to higher quality communication for end users (a solid benefit!) over the short- to medium-term. Whether in the long-term the relationship flips, with the AI doing all the talking and leaving humans out in the cold... that remains to be seen.
At the end of the day, our “threat” team lost the debate, but we did manage to close the audience poll gap from 70-30 to 55-45, so everyone left feeling pretty good about themselves.
And as for me... well, I'm still not really an AI expert, but I'm proud to have held my own with the best of them and remain excited about the future — for humans and machines alike!