
They say that the robots are coming for our jobs (I don’t know who this “they” is; I just heard it somewhere). I wonder, though, if the real concern isn’t that robots will be taking over our jobs but that we have begun to adapt our expectations so that they might be met by a machine. I know I have, on more than one occasion, decided not to do something I wanted to do on the computer because I couldn’t really figure out how.
The big concern in machine learning right now seems to be the “value learning problem.” My first thought was that this was related to ethical issues and, I suppose, at its core, that is true. But in the shallow end, this is simply about teaching a machine which, of the oodleplex of choices, is the important, or even relevant, data (and I have dispensation to use the singular form of the verb, by the way).
So will the robots write their own manuals? There’s a bit in one of Terry Pratchett’s Discworld books (Jingo, maybe? Or Thud!?) where Vimes’s technomantic PA (called a Dis-organizer because it’s powered by an imp) chides him for not reading the manual. To which his response is, “Did you read the Vimes manual?” The imp is confused but not a little hopeful that such a thing exists. My point is that I can’t imagine a machine (absent true Artificial General Intelligence) being able to understand and adjust to the rhetorical situation for any documentation need.
If you’re interested in keeping up with the AI writing trend, follow this guy’s blog – though I warn you, wait till you’ve finished your current assignment. It’s a serious rabbit hole.
