OpenAI’s ChatGPT will ‘see, hear and speak’ in major update

OpenAI’s ChatGPT is getting a major update that will enable the viral chatbot to interact with users through voice and images, bringing it closer to popular artificial intelligence (AI) assistants like Apple’s Siri.
The voice feature “opens the door to many creative and accessibility-focused applications,” OpenAI said in a blog post Monday.
Similar AI services such as Siri, Google Assistant and Amazon.com’s Alexa are integrated with the devices they run on and are often used to set alarms and reminders and provide information from the Internet.
Since its launch last year, companies have adopted ChatGPT for a wide range of tasks, from summarizing documents to writing computer code, setting off a race among big tech companies to launch their own generative AI-based offerings.
ChatGPT’s new voice feature can also tell bedtime stories, moderate dinner table discussions, and read users’ text input aloud.
OpenAI said the technology behind the new voice feature is being used by Spotify to help podcasters on its platform translate their content into different languages.
With image support, users can take photos of their surroundings and ask the chatbot to troubleshoot why their grill won’t start, explore the contents of their fridge to plan a meal, or analyze a complex graph for work-related data.
Alphabet’s Google Lens is currently the popular choice for finding information about images.
ChatGPT’s new features will be rolled out to subscribers of its Plus and Enterprise plans over the next two weeks.