Yle's new principles of responsible artificial intelligence provide guidance on the use and development of artificial intelligence throughout the company

Yle has developed and implemented new principles of responsible artificial intelligence. The principles provide guidance on the use and development of artificial intelligence throughout the company.
Yle wants to develop responsible artificial intelligence in cooperation with a range of partners for the benefit of society as a whole.
The principles state that Yle uses artificial intelligence to better fulfil its public service duty and uphold its values. The main principle is that at Yle, people are always responsible for decisions concerning artificial intelligence and for its outputs. Artificial intelligence solutions must not compromise Yle's reliability.
"AI includes opportunities that go to the core of public service. At Yle, artificial intelligence is never put to work without the constant monitoring of its outputs. The principles reflect Yle's public service values: reliability, independence and dignity," says Yle's CEO Merja Ylä-Anttila.
AI development work is carried out at Yle in close collaboration with Finnish partners, companies and researchers. The aim is to serve Finnish society as broadly as possible.
"At Yle, it is vital to everyone that the development of artificial intelligence reflects the diversity of people, culture and society. As a responsible public service operator, we cannot use artificial intelligence developed elsewhere, as we can’t see what’s under the hood. We want to promote the idea that AI solutions would support the Finnish media, democracy and people's understanding as effectively as possible," says Ylä-Anttila.
Yle is one of the first media organisations to draft principles of responsible artificial intelligence covering an entire company's operations. They serve as the starting point for more detailed guidelines in the company. The principles will be updated as necessary.