I remember interviewing Todd Yellin, the former Head of Product at Netflix, who told me that trying different things and falling down is a learning experience that helps you innovate. I also asked him about analytics and the future of storytelling, and he responded that if AI could ever write stories and scripts better than human beings, he hoped it would happen long after we were all dead. That is a telling statement from someone who played such an integral role in one of the most renowned companies in the world. I wonder what his reaction has been to the advent of generative AI since the second half of 2022.

The role of ChatGPT has grown steadily over the months, with the release of its third generation and, more recently, its fourth. For many, it is an attack on the content creation industry, but perhaps only on the surface. That is the quick response, because it may not make the entire sector obsolete, though there is room for outsourcing. In fact, high-paying roles such as prompt engineer are already being advertised, with salaries going as high as $335,000, or about ₹2.7 crore, a year. People are being paid to craft the right, or more innovative, prompts for large language models such as ChatGPT. It raises the question of whether people would be more willing to leave their minds on the table and outsource their thinking to generative AI models, even more so when they are being paid to make the human mind slightly more obsolete.

Calculators, for instance, had been an important business tool long before the handheld calculator arrived. When it started to make its way into the classroom about 50 years ago, there was heated debate about its effect on learning. There were even reports in the late 1980s of math teachers physically protesting against its use. On one hand, one could argue that calculators didn't make accountants and similar professionals obsolete; they integrated the calculator into their daily work to accelerate their output and value. On the other hand, when was the last time you gave yourself twenty seconds to solve a mathematical problem instead of rushing to the calculator on your phone? When was the last time you allowed yourself to think, solve and deal with the numbers, instead of relinquishing your thought process to the modern abacus?

There is an argument that generative AI models would handle all the boring, monotonous, baseline work, leaving us to handle the complex big stuff. But a painful question remains: the Internet has spoiled us over the years, conditioning us to run to a browser rather than think, instead of looking deep into ourselves to pick out what we already know about a subject and curating that for our own understanding. So what happens next? Simple things tend to snowball into more complex ones. Does this further convert our dependence on technology into a crutch? And are we even sure we have the capacity to do the complex work that generative AI would leave for us? Are you comfortable finding out?

According to the International Data Corporation (IDC), global spending on AI may exceed $110 billion by 2024, especially given how startup founders are looking at AI and generative AI. They may see this as just the tip of the iceberg for generative AI's use cases across many different spaces, and hope to seamlessly integrate it into their daily operations. What does this mean for professionals, though? If we can relinquish capabilities to an AI-powered system to do our thinking, do our writing, do our coding, think through the engineering, form some diagnoses, write the legalese, edit our videos, translate our languages and more, could this mean the end of the professional in the coming years? Are we already seeing the professional decline?

And what about startups trying to integrate AI? What it may mean is that AI gets used not because it is understood, not because it is lucrative, not because it has potential, but simply for AI's sake. Soon enough, people will see the companies that soar on the back of the next big thing, and they will also see the companies that don't live up to the hype collapse like a house of cards. But unlike previous crashes, where the utility was still some way down the road, AI's penetration of business may have already taken place.

Even so, the hype around generative AI may not be enough on its own to attract investment into startups. With the funding winter leading many to sober up after the profligate party, it may all come down to a startup's path to profitability. And maybe a startup doesn't have to reinvent the wheel with AI; focusing on moulding generative AI for specialist cases may be enough. Maybe all it takes is creative use.

Right now, OpenAI has said it is not currently training GPT-5 and won't be for some time. But if we ever get to something like a GPT-10, how much more developed could it become, and would that be a detriment to the average human being? And if we keep feeding ChatGPT our thoughts the way we have, making it our external mind, could it end up telling us what to think and what not to think? Which makes the letter co-signed by Elon Musk and thousands of others, demanding a pause in AI research, all the more interesting. Irrespective of the letter's lack of signature verification, which allowed names of people who had not actually signed it to appear, its contents are worth pondering: could innovation really be paused, or is the development of systems more powerful than GPT-4 inevitable? Or are we focusing on the wrong aspects of AI, treating it as an apocalyptic scenario waiting to happen rather than working to make it more diverse and equitable?

Tech visionary Elon Musk, once a backer of OpenAI, has voiced concerns over AI's potential for civilizational destruction and has called for government regulation. Is it that we tend to enjoy things, and only after we have comfortably soaked in them do we start questioning the ethics of it all? Would it be like knowing a beverage is bad for you and consuming it anyway, and then, when you get sick, shrugging your shoulders and remarking, "How could I have known?" Are we getting overawed by technology and allowing it to have an outsized impact on industries? If this is the next Industrial Revolution, who is in charge and who is responsible? AI is all about automation and reducing the burden on the human mind. With many companies looking to put out AI systems or integrate them into their product offerings, has the technology been unleashed as a dangerous beast, or can it truly be put to use for the sole benefit of human beings?

Shrija Agrawal is a business journalist who covered startups and private capital markets before they were considered cool in India

The views expressed are personal
