Part of the reason OpenAI was established by Elon Musk and other notable Silicon Valley tech pioneers was concern about the misuse of AI and a desire to raise awareness of AI ethics. OpenAI aims to research and develop AI solutions and to expose the issues and concerns the technology will create in a way that is communicated to the public rather than withheld within commercial and governmental circles.
Publishing its latest automatic text generator, capable of producing fake news, fits OpenAI’s mission to tackle ethical AI use and raise awareness in the public domain. The latest program, called GPT-2, is not perfect at creating well-formed stories from an original piece of political or social commentary. Complex general-knowledge understanding is what the Alexa Prize, with its goal of a 20-minute conversation, is trying to achieve, and for many natural language AI experts it is one of the key milestones measuring how far everyday conversational computing is from reality. We are a long way from the general computer conversation ability we have seen on episodes of Star Trek.
What this shows, however, is alarmingly rapid progress in how some newsworthy stories could now be automated and manipulated with spin and fake messages, which is a serious issue. Giving researchers access to these tools is a double-edged sword: it accelerates defensive design work, but the lid of Pandora’s box is now open, and much more work is urgently needed to protect trust and the sources of real news in an age of populist attitudes and global media.