Posts Tagged ‘morality’

Atrocity

September 26, 2024

I’m eighty-two years old, and I can’t remember a time in my life when atrocities were not being committed somewhere. I may not have known it at the time, but I learned about it later. I also learned later that atrocities were being committed for generations before I was born. I don’t think we need a dictionary definition of “atrocity.” It’s pretty clear that these are moral transgressions against people or peoples reprehensible enough that a majority of us would, or should, be outraged and feel that they should be exposed, arrested, prevented.

Unfortunately, the fact that they keep happening, and become larger and more unimaginably horrible, is adequate evidence that prevention isn’t possible. It’s not a question of whether we should just live with it. In fact we do live with it, and that makes me wonder about the ethical and moral frameworks that underpin our society…globally. We all know that the moral boundaries of different cultures, different societies, are not the same, but we assume that certain fundamentals should apply universally.

Don’t we?

Recently there’s been a lot of talk about the legal and moral boundaries that should be applied to Artificial Intelligence, AI. More than fifty years ago, scientists were debating the rules governing robots. It was early days back then, and almost no one was talking about AI, but there were a tiny number of thinkers who could foresee a day when computers would become powerful enough to learn by themselves. Somewhere along the line they came up with an ethical guideline that they called the “prime directive.” It’s an interesting philosophical concept, declaring that robots should not be made that kill people. I think the underlying principle was that there should only be “good” robots that wouldn’t hurt people. Military funding of robotic research and development kind of put that prime directive in the shade. But AI is far more potent than a robot that welds parts on your car or vacuums your living room. What kind of prime directives should apply to AI, and from where would they be derived?

More importantly, how could they be enforced, and by whom? Lately, scientists and world leaders in the tech fields have been warning that AI may be more dangerous than the nuclear weapons threats we’ve been living with for seventy-odd years. The nuclear weapons business is generally state controlled and internationally monitored. AI is neither.

Consider this. You buy something from Amazon or any other online outlet. The purchase, even a search without a purchase, is recorded and tracked back to your computer, and Amazon will use that information to contact you directly to offer similar products. We never stop to think about the complexities of computation, communication and data mining that involves. Now consider the buying habits of hundreds of millions of people becoming data points to be mined by AI. O.K., now think about “the cloud.” What the fuck is the cloud? It’s data storage not confined to your computer or my computer. So where is this cloud, and who has access to it? Well, people actually pay for access to it so that they can extract data for a variety of purposes. But…AI has access to it.

Wait. Wait. Let’s not get all paranoid about this. The fact that AI can learn by itself, and that its sources of learning are many and varied and include what is stored in the cloud, should be a concern, but we don’t need to get crazy about it.

Which brings us back to atrocities. What makes atrocities possible is the deterioration and failure of a universal moral framework that identifies, prosecutes and prevents them. When we consider the potential for mischief that is possible with AI…it seems we need to re-examine the whole idea of universal prime directives, and that means looking at our whole socio-political and economic structure, globally: how we live, how we relate to each other, how our laws work or don’t work.

If we don’t…it will be our own atrocity.

Blues: 25 09 24