> ... a translator’s and interpreter’s work is mostly about ensuring context, navigating ambiguity, and handling cultural sensitivity. This is what Google Translate cannot currently do.
Google Translate can't, but LLMs given enough context can. I've been experimenting extensively with LLMs for translation between Japanese and English for more than two years, and, when properly prompted, they are really good. I say this as someone who worked for twenty years as a freelance translator of Japanese and who still does translation part-time.
Just yesterday, as it happens, I spent the day with Claude Code vibe-coding a multi-LLM system for translating between Japanese and English. You give it a text to be translated, and it asks you questions that it generates on the fly about the purpose of the translation and how you want it translated--literal or free, adapted to the target-language culture or not, with or without footnotes, etc. It then writes a prompt based on your answers, sends the text to models from OpenAI, Anthropic, and Google, creates a combined draft from the three translations, and then sends that draft back to the three models for several rounds of revision, checking, and polishing. I had time to run only a few tests on real texts before going to bed, but the results were really good--better than any single model I've tested on its own, much better than Google Translate, and as good as top-level professional human translation.
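For anyone curious about the shape of the pipeline, here's a minimal sketch. The function names and the `Model` call interface are my placeholders, not real SDK calls--in practice each "model" would wrap an API client for OpenAI, Anthropic, or Google, and the actual prompts are much more elaborate:

```python
from typing import Callable, Dict

# A "model" is anything that takes a prompt and returns text.
# In the real system, each one wraps a provider's API client.
Model = Callable[[str], str]


def build_prompt(text: str, answers: Dict[str, str]) -> str:
    """Turn the user's answers to the on-the-fly questions into a translation brief."""
    brief = "; ".join(f"{k}: {v}" for k, v in sorted(answers.items()))
    return f"Translate the following text. Brief: {brief}\n\n{text}"


def translate(
    text: str,
    answers: Dict[str, str],
    models: Dict[str, Model],   # e.g. one entry each for OpenAI, Anthropic, Google
    combiner: Model,            # model used to merge drafts and revisions
    rounds: int = 3,            # revision/checking/polishing passes
) -> str:
    prompt = build_prompt(text, answers)

    # Step 1: get an independent draft from each model.
    drafts = {name: m(prompt) for name, m in models.items()}

    # Step 2: combine the drafts into a single working translation.
    joined = "\n---\n".join(f"[{n}]\n{d}" for n, d in sorted(drafts.items()))
    draft = combiner(f"Combine these drafts into one best translation:\n{joined}")

    # Step 3: send the combined draft back for several rounds of revision.
    for _ in range(rounds):
        reviews = [
            m(f"Revise and check this draft against the brief:\n{draft}")
            for _, m in sorted(models.items())
        ]
        draft = combiner("Merge these revisions:\n" + "\n---\n".join(reviews))

    return draft
```

The interesting design choice is that the models critique a shared draft rather than each other's outputs directly, which keeps the revision rounds converging instead of ping-ponging between styles.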
The situation is different with interpreting, especially in person. If that were how I made my living, I wouldn't be too worried yet. But for straight translation work where the translator's personality and individual identity aren't emphasized, it's becoming increasingly hard for humans to compete.