How an artificial intelligence-powered chatbot could help scientists decode the Red Planet
ChatGPT, an artificial intelligence-powered large language model, has caught the world’s attention as it disrupts work across sectors.
While businesses are using ChatGPT to streamline workflows, can the new chatbot accelerate research publication, especially in space exploration and planetary science?
AI isn’t new in the field of space exploration. NASA’s mission planners at the Jet Propulsion Laboratory currently use AI tools to create models that help them determine probable outcomes of different scenarios and examine various mission parameters.
Then there is CIMON 2, a robot built by Airbus at the German Aerospace Center that uses IBM’s Watson. The emotionally intelligent ball-shaped robot was sent to the International Space Station to help astronauts manage tasks and reduce stress.
So, how can a chatbot like ChatGPT help researchers study space, in particular the Red Planet?
Meet ChatGPT, the author
Consider a scenario where a robot sent to Mars has access to ChatGPT or a similar AI language model. Such a tool could, in theory, analyse data from the scientific instruments on the robot, giving researchers "on the spot" findings in plain language.
Beyond that, the data could be assembled and assessed in a scientific format, ready to be transmitted to Earth and submitted to a scientific journal, after peer review, of course.
Steve Ruff, Associate Research Professor at Arizona State University's School of Earth and Space Exploration in Tempe, Arizona, is sceptical about AI publishing results.
“My immediate reaction is that it’s highly unlikely that ‘on-the-spot’ manuscripts would be a realistic scenario given how the process involves debates among the team over the observations and their interpretation,” Ruff told Space.com.
One major limitation of ChatGPT is its knowledge cutoff: the model's training data extends only to 2021 (although certain plugins now allow the bot limited access to the internet).
Experts also worry that the tool is prone to "hallucination", meaning it can give answers that appear plausible but are factually incorrect or irrelevant. The advice, then, is to avoid using it in cases where accuracy cannot be compromised.
“It could be done but there could be misleading information,” Sercan Ozcan, a Reader in Innovation and Technology Management at the University of Portsmouth in the United Kingdom, told Space.com.
Amy Williams, Assistant Professor in Geological Sciences at the University of Florida in Gainesville, believes that while tools like ChatGPT could assist researchers, they cannot replace human input, according to Space.com.