What I find interesting is the implicit prioritisation: explainability, (human) accountability, lawfulness, fairness, safety, sustainability, data privacy and non-military use.
I agree, though I would prefer to highlight the first half of the first item - transparency. Also, perhaps make Safety an independent principle rather than combining it with Security.
These are a good set of principles that any company (or individual) can follow to guide how they use AI.
Good guidelines. My primary principle for using AI is that it should be used as a tool under my control to make me better by making it easier to learn new things and by offering alternative viewpoints. Sadly, AI training seems headed towards producing ‘averaged behaviors’, while in my career the best I had to offer employers was an ability to think outside the box and bring different perspectives.
How can we train and create AIs with diverse creative viewpoints? The flexibility and creativity of AIs, or the lack thereof, should guide the proper principles for using AI.
I'm not optimistic about this in the short term. Creative and diverse viewpoints seem to come from diverse life experiences, which AI does not have; and where such experiences are present in the training data, they are mostly washed out. Statistical models are like that. The objective function is to predict close to the average output, after all.
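A toy sketch of that "predict the average" point (my own illustration, assuming a squared-error objective; cross-entropy behaves analogously):

    import numpy as np

    # Three very different "viewpoints" in the training data.
    rng = np.random.default_rng(0)
    y = rng.choice([-3.0, 0.0, 6.0], size=10_000)

    # The best single (constant) prediction under squared error is the mean,
    # i.e. an averaged answer that matches none of the individual viewpoints.
    candidates = np.linspace(-5, 7, 1201)
    losses = [np.mean((y - c) ** 2) for c in candidates]
    best = candidates[int(np.argmin(losses))]
    print(best, y.mean())  # both land near the average of the three values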
In the long term I am at least certain that AI can emulate anything humans do en masse, where there is training data, but without unguided self evolution, I don't see them solving truly novel problems. They still fail to write coherent code if you go a little out of the training distribution, in my experience, and that is a pretty easy domain, all things considered.
The vast majority of advances seem to be of the form "do X for Y", where neither X nor Y is novel but the combination is. I have no idea whether AI is going to be better than humans at this, but it seems like it could be.
It is an organisation-wide document of "General principles"; how could it possibly have something more specific to say about the inherently context-specific trade-offs of each specific use of AI?
You don't even need to go as far as saying someone didn't follow the policy; you can just say you need to review the policies. That way, conveniently enough, nobody is really ever at fault!
Organizations above a certain size absolutely cannot help but publish this stuff. It is the work of senior middle managers. Ark Fleet Ship B.
I work in a corporate setting that has been working on a "strategy rebrand" for over a year now, and despite numerous meetings, endless PowerPoint decks, and god knows how much money to consultants, I still have no idea what any of this has to do with my work.
In such a scientific environment, there are gentlemen's agreements about many things that boil down to "Don't be an asshole" or "Be considerate of others", with some hard requirements at this or that point for things that are very serious.
What's so special about military research or AI that the two can't be done together even though the organization is not in principle opposed to either?
CERN is in principle opposed to military research. That and stuff like lawfulness, fairness, sustainability, privacy are just general CERN principles restated for fluff.
One reason I can think of is with regard to confidentiality. A lot of AI services are controlled by companies in the US or China, and they may not want military research to leak to these countries.
Classified projects obviously have stricter rules, such as airgaps, but sometimes the limits are a bit fuzzy, like a non-classified project that supports a classified project. And I may be wrong, but academics don't seem to be the type who are good at keeping secrets or who see the security implications of their actions. Which is a good thing in my book: science is about sharing, not keeping secrets! So "no AI for military projects" could be a step in that direction.
> CERN’s convention states: “The Organization shall have no concern with work for military requirements and the results of its experimental and theoretical work shall be published or otherwise made generally available.”
CERN was founded after WW2 in Europe, and like all major European institutions founded at the time, it was meant to be a peaceful institution.
But at least they make everything public knowledge, instead of keeping it secret and only selling it to one nation.
Sure, though "have no concern with" comes across to me less like "we avoid building anything that could conceivably be used as a weapon by anyone" and more like "we're not in that business, but it's not our concern if you manage to stab yourself with it. It's not secret".
Fixed that for you. That's been the case since we discovered sticks and stones, but it doesn't mean that CERN is lying when they say they want to focus on non-military areas.
Let's not assume the worst of an institution that's been fairly good for the world so far.
You didn't fix anything.
> Let's not assume the worst of an institution that's been fairly good for the world so far.
I'm not assuming the worst. I'm just being realistic, and I think it would be nice if CERN explicitly acknowledged the fact that what they do there could have serious implications for weapons technology.
You're really grasping at straws here. CERN doesn't need to do anything. Nor do universities, for example.
CERN is explicit about something they know isn't true. They could just say nothing.
I'm fine with CERN, its scientific mission and whatever they come up with there, and I have contributed to their cause in a minor way, so I can do without the lecturing.
If you do research it is easy to stick your head in the sand and pretend that, as an academic, you have no responsibility for the outcome. But that's roughly analogous to a gun manufacturer pushing the 'guns don't kill people, people do' angle. CERN has a number of projects on the go whose only possible outcome will be more powerful or more compact weapons.
For instance, anti-matter research. If and when we manage to create anti-matter in larger quantities and to do so more easily, it will have a potentially massive impact on the kinds of threats societies have to deal with. To pretend that this is just abstract research is willfully abdicating responsibility.
Once it can be done it will be done, and once it is done it is a matter of time before it is used. Knowledge, once gained, cannot be unlearned. See also: the atomic bomb. Now, CERN isn't the only facility where such research takes place, and I'm well aware of the geopolitical impact of being 'late' when it comes to such research. I would just like them to be upfront about it. There is a reason why most particle accelerators and associated goodies are funded by the various departments of defense.
Your typical university research lab is not doing stuff with such impact, though the biology departments of some of them are investigating things that can easily be weaponized, and that work should come with similar transparency about possible uses.
> Human oversight: The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human.
Quite. One would hope, though, that it would be clear to prestigious scientific research organizations in particular, just like everything else related to source criticism and proper academic conduct.
Sure, but the way you maintain this standard is by codifying rules that are distinct from the "lower" practices you find elsewhere.
In other words, because the huge DOGE clusterfuck demonstrated what horrible practices people will actually enact, you need to put this into the principles.
And with testing and other services, I guess human oversight can be reduced to _looking at the dials_ for the green and red lights?
Oddly enough, nowadays CERN is very much like a big corpo: yes, they do science, but there is a huge overhead of corpo-like people running CERN as an enterprise that should bring "income".
Someone's inputs are someone else's outputs; I don't think you have spotted an interesting gap. Certainly just looking at the dials will do for monitoring functioning, but it falls well short of validating the system's performance.
The really interesting thing is how that principle interplays with their pillars and goals, i.e. if the goal is to "optimize workflow and resource usage", then having a human in the loop at all points might limit or fully erode this ambition. Obviously it's not that black and white: certain tasks could be fully autonomous while others require human validation, and you could still be net positive. But this challenge is not exclusive to CERN, that's for sure.
It's still just a platitude. Being somewhat critical is still giving some implicit trust. If you didn't give it any trust at all, you wouldn't use it at all! So my read is that they endorse trusting it, exactly the opposite of what they appear to say!
It's funny how many official policies leave me thinking that they're corporate cover-your-ass policies, and that if they really meant it they would have found a much stronger and plainer way to say it.
"You can use AI but you are responsible for and must validate its output" is a completely reasonable and coherent policy. I'm sure they stated exactly what they intended to.
If you have a program that looks at CCTV footage and IDs animals that go by... is a human supposed to validate every single output? How about if it's thousands of hours of footage?
I think the parent comment is right. It's just a platitude for administrators to cover their backs, and it doesn't hold up to actual use cases.
I don't see it so bleakly. Using your analogy, it would simply mean that if the program underperforms compared to humans and starts making a large number of errors, the human who set up the pipeline will be held accountable. If the program is responsible for a critical task (i.e. the animal will be shot depending on the classification) then yes, a human should validate every output or be held accountable in case of a mistake.
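To make that middle ground concrete, here is a hypothetical sketch (the labels, thresholds and function names are invented for illustration, not anything CERN prescribes): route outputs to a human when the model is unsure or the consequence is serious, and spot-check a random sample of the rest so complacency gets caught.

    import random

    REVIEW_THRESHOLD = 0.90     # below this confidence, a human must look
    AUDIT_RATE = 0.02           # fraction of confident outputs sampled for review anyway
    CRITICAL_LABELS = {"wolf"}  # classifications that would trigger an irreversible action

    def needs_human_review(label: str, confidence: float) -> bool:
        if label in CRITICAL_LABELS:         # critical outcomes: always validated
            return True
        if confidence < REVIEW_THRESHOLD:    # low confidence: always validated
            return True
        return random.random() < AUDIT_RATE  # otherwise: occasional audit

    for label, conf in [("fox", 0.97), ("wolf", 0.99), ("badger", 0.62)]:
        queue = "human review" if needs_human_review(label, conf) else "auto-accept"
        print(f"{label} ({conf:.2f}) -> {queue}")

The accountable human then owns the thresholds and the audit results, rather than each individual frame.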
I take an interest in plane crashes and human factors in digital systems. We understand that there's a very human aspect of complacency that is often read about in reports of true disasters, well after that complacency has crept deep into an organization.
When you put something on autopilot, you also massively accelerate your process of becoming complacent about it -- which is normal; it is the process of building trust.
When that trust is earned but not deserved, problems develop. Often the system affected by complacency drifts. Nobody is looking closely enough to notice the problems until they become proto-disasters. When the human is finally put back in control, it may be to discover that the equilibrium of the system is approaching catastrophe too rapidly for humans to catch up on the situation and intercede appropriately. It is for this reason that many aircraft accidents occur in the seconds and minutes following an autopilot cutoff. Similarly, every Tesla that ever slammed into the back of an ambulance at the side of the road was a) driven by an AI, b) one that the driver had learned to trust, and c) supervised by a driver who - though theoretically responsible - had become complacent.
Theoretical? I don't see any reason why complacency would be fine in science. If it's a high school science project and you don't actually care at all about the results, sure.
If some dogs chew up an important component, the CERN dog-catcher won't avoid responsibility just by saying "Well, the computer said there weren't any dogs inside the fence, so I believed it."
Instead, they should be taking proactive steps: testing and evaluating the AI, adding manual patrols, etc.
They endorse limited trust, not exactly a foreign concept to anyone who's taken a closer look at an older loaf of bread before cutting a slice to eat.
That doesn't follow. Say you write a proof for something I request; I can then check that proof. That doesn't mean I don't derive any value from being given the proof. A lack of trust does not imply no use.
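A tiny illustration of that asymmetry (my own example, nothing to do with CERN's document): producing an answer can be expensive or untrusted while checking it is cheap, and it is the cheap check you actually rely on.

    def claimed_factors(n: int) -> tuple[int, int]:
        # Stand-in for an untrusted generator (an AI, a colleague, brute force...).
        for p in range(2, n):
            if n % p == 0:
                return p, n // p
        raise ValueError("no non-trivial factors")

    n = 8_633                        # happens to be 89 * 97
    p, q = claimed_factors(n)        # the expensive / untrusted step
    assert p * q == n and 1 < p < n  # the cheap verification step we actually trust
    print(p, q)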
> Responsibility and accountability: The use of AI, including its impact and resulting outputs throughout its lifecycle, must not displace ultimate human responsibility and accountability.
This is critical to understand if the mandate to use AI comes from the top: make sure to communicate from day 1 that you are using AI as mandated, but that productivity is not increasing as mandated.
Play dumb and protect yourself from "if it's not working out then you are using it wrong" attacks.
This corporate crap makes me want to puke. It is a consequence of the forced bureaucracy from European regulations, particularly the EU AI Act, which is not well thought out and actively adds liability and risk to anyone on the continent touching AI, including old-school methods such as bank credit scoring systems.
The content is corporate. The EU AI Act is extraterritorial. You don't have to be in the EU to adopt this very set of "AI Principles", but if you don't, you carry liability.
‘Sustainability: The use of AI must be assessed with the goal of mitigating environmental and social risks and enhancing CERN's positive impact in relation to society and the environment.’ [1]
‘CERN uses 1.3 terawatt hours of electricity annually. That’s enough power to fuel 300,000 homes for a year in the United Kingdom.’ [2]
I think AI is the least of their problems, seeing as they burn a lot of trees for the sake of largely impractical pure knowledge.
[1] https://home.web.cern.ch/news/official-news/knowledge-sharin... [2] https://home.cern/science/engineering/powering-cern
Also, the web was invented at CERN.
Humans have poured resources into the pursuit of largely impractical pure knowledge for millennia. This has been said of an incredible number of human scientific endeavors, before they found use in other domains.
I presume that this policy is not about building data centres but about the use of AI by CERN employees, so essentially about the marginal cost of generating an additional Python script, or something. I don't know if this calculation ever makes sense on the global scale, but if one’s job is literally to spend energy to produce knowledge, it becomes even less straightforward.
This is a very silly argument. The energy expended should be justified on its own (scientific!) merits. The fact the web happened to be invented at CERN has almost nothing to do with the fact that they burn through terajoules of electricity every year.
All this impractical knowledge people accumulated over centuries gave you cars, planes, computers, air conditioning, antibiotics, iPhones, and, in fact, everything you have gained since humankind left the trees. So I would rather burn these 1.3 terawatt-hours on this than on, say, running Facebook or mining bitcoin.
Far less power than those projected gigawatt data centers that are surely the one thing keeping AI companies from breaking even.
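Rough numbers to back that up (back-of-envelope only, using 8,766 hours per year):

    TWH_PER_YEAR = 1.3
    HOURS_PER_YEAR = 8_766
    avg_load_mw = TWH_PER_YEAR * 1e6 / HOURS_PER_YEAR  # TWh -> MWh, spread over the year
    print(round(avg_load_mw))  # ~148 MW average draw, well under a single gigawatt-scale site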
Their ledgers are balanced just fine for a while.