AI tools behaving badly (like Microsoft’s Bing AI losing track of which year it is) has become a subgenre of AI reporting. But it is often hard to tell the difference between a bug and poor construction of the underlying AI model, which analyzes incoming data and predicts what an acceptable response will be, as when Google’s Gemini image generator drew diverse Nazis because of a filter setting.
Now, OpenAI is releasing the first draft of a proposed framework, called Model Spec, that would shape how AI tools like its own GPT-4 model respond in the future. The OpenAI approach proposes three general principles: that AI models should assist the developer and end user with helpful responses that follow instructions, benefit humanity...
from The Verge - All Posts https://ift.tt/C0QmB1V