Artificial intelligence: The future of food or the end of humanity?

By Flora Southey


For some, including those keeping a keen eye on AI’s potential to optimise food and agriculture production, the technology presents an opportunity to achieve what has previously been impossible: more food with fewer natural resources. GettyImages/onurdongel
While some fear AI spells the end of humanity, others are optimistic the technology will optimise food production and achieve ‘what has previously not been possible’.

“Mitigating the risk of extinction from artificial intelligence (AI) should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This was the succinct statement published this week by the non-profit Center for AI Safety, signed by executives at OpenAI and Google DeepMind; university professors of machine learning, computer science, and philosophy; and Bill Gates, amongst others.

The statement comes a week after warnings were sounded by OpenAI, the developer of artificial intelligence chatbot ChatGPT, about the technology’s potentially harmful risks. According to OpenAI’s founders, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. ‘Existential risk’, they suggest, is a distinct possibility.

But not all working with AI are as concerned about a potential doomsday scenario.

For some, including those keeping a keen eye on AI’s potential to optimise food and agriculture production, the technology presents an opportunity to achieve what has previously been impossible: more food with fewer natural resources.

A new wave of artificial intelligence

The term artificial intelligence was first coined in 1955, but according to Puneet Mishra, a researcher at Wageningen University & Research (WUR) in the Netherlands, its true application only began ‘very recently’.

Today, artificial intelligence is founded on automated machine learning, whereby AI-powered bots can make decisions and issue commands for future operations.

The ‘biggest bottleneck’ for AI adoption has always been data availability, explained Giuseppe Lacerenza, investor and operator at Slimmer AI, a Dutch company that helps build B2B SaaS applied AI businesses. Traditionally, data has been stored in different silos, and often in different formats.

But this is changing, suggested Lacerenza at F&A Next, an event hosted by Rabobank, Wageningen University & Research, Anterra Capital and StartLife last week in the Netherlands. Nowadays, big corporates are restructuring their data, and start-ups are setting up data architecture from the get-go, all with AI in mind.

And it’s not just about data, which the Slimmer AI executive said has been ‘growing massively’. It’s also about developments in the ‘feedback loop’ – essentially the machine’s ability to learn from data. Combined, these two aspects serve to improve AI’s optimisation function.

“The real opportunity that I see is moving from what has been a local level of optimisation, to a global level of optimisation…and I think the real understanding today is that barriers to the adoption of AI are decreasing day-by-day,” Lacerenza told delegates at the event.

The AI potential in food and agriculture

So what does this optimisation look like in terms of agri-food production?

The use of AI by agriculture and food industries is less developed than in the finance and medical sectors, amongst others, explained Mishra. This, again, comes down to data. “There is structured data [in these sectors], but in the case of food and agriculture, there is not as much structured data available…this has led to [lower] adoption of AI in this field.”

The tide is turning, however. In recent years, more effort has been put into collecting or combining data, and even using AI to generate data that may be lacking.

This is the case for WUR’s agricultural project leveraging AI to optimise crop cultivation with fewer natural resources, an approach expected to reduce energy use and labour costs. The project, named Autonomous Greenhouses, uses AI to train the robots operating within these autonomous farms and greenhouses.

But the robots cannot be trained for every single agricultural situation with real-world examples. Instead, the research team simulates greenhouse environments and plant structures, to help the robot identify a greater variety of crops and situations. “Then the [robot] can work in the real greenhouses later, and take the decisions [required] about harvesting, or sorting [crops] based on their quality etc.”

AI can also be used in food manufacturing, for example in chocolate production. “We do a lot of research in the area of chocolate… Manufacturing chocolate through the use of sensors, and then combining data from these sensors with AI. [We] then take control of the process and optimise it to make it more efficient,” said Mishra, adding that the two primary targets in this process are quality and consistency.

Regulating to take ‘full control’ of AI

When focusing on automating crop production or optimising the manufacturing of chocolate, it’s easy to disassociate AI from the aforementioned ‘existential threat’.

And while Mishra said there have always been concerns around AI – and it ‘spelling the end of humanity’ – that’s not his personal view. “I think AI can help us, we can use it to our own benefit and do tasks that were not possible beforehand,” he told delegates.

What is required to keep any potential threats at bay is regulation. It should not be possible for AI to generate nonsensical information disguised as fact, Mishra suggested.

Slimmer AI’s Lacerenza agreed. “It comes down to the transparency behind the data used to train AI, which can carry bias,” he told delegates.

“Regulation needs to push towards the ‘explainability’ of AI, because that will give us, as human beings, full control of a super helpful tool that today comes across as a bit ‘out of control’.”
