In an IFTF course I found this signal: A 2019 survey by the American educational nonprofit Common Sense found that 43 percent of U.S. parents say their 6- to 8-year-old children use voice-activated assistants embedded in smart speakers to help with homework.
I suspect these percentages have gone up during the pandemic, as education has moved even further online.
Is there any hope that the design of assistants and AI will be influenced by a more ethical model of data harvesting and management? I am skeptical about this happening, and as things stand now I feel that it’s best for everybody to avoid assistants and AI as much as possible. But I’m eager to know about any reason to think this might change for the better!
To make things worse, it's hard to avoid AI, since it's used to nudge you into reading certain posts or articles, watching certain videos, and listening to certain music. It's not always nefarious, but it's becoming an ever-present, ambient technology. There is a fair amount of work by specialists in AI ethics, and interest in the theme at the European Commission, though I'm not sure whether Europe will lead the way on this regulation as well.
Also worth noting is the University of Virginia Center for Data Ethics. I have two good friends working there, people whose moral compasses are in excellent working order, but I don't know much about the Center's overall work.