Words Do Matter

Words do matter, and that’s not just a philosophical issue.

Smart contracts are not contracts, cryptocurrencies are not legal tender, and Artificial Intelligence is very artificial and definitely not intelligent.

A poor understanding of the meaning of words leads to confused thinking and negatively affects the lawmaking process.

The CIA to provide evidence of Huawei's involvement with Chinese Military and Intelligence

Even if true, where is the beef? That Huawei was funded by the Chinese military and intelligence is not an issue. Have we already forgotten the “Fritz Chip”, the use of (Western-sponsored) State malware in intelligence and criminal investigations, and so on?

From a national security and public policy perspective, it is only logical that a sovereign state explores all possibilities to obtain superiority over its foes – and its “friends” too. Thus – if confirmed – the proof offered by the CIA of Huawei's involvement with the national security apparatus shouldn't come as a surprise.

If software were a military weapon

Software manufacturing is often compared to car building, and there are plenty of such analogies available, ranging from jokes to serious analysis.

A less-considered comparison is the manufacturing of military weapons as opposed to sporting weapons.

The history of the US Army contest in which Beretta prevailed over the German-Swiss SIG Sauer, securing the Italian company a rich supply contract for the “92” (renamed “M9” in the US Army naming system), is revealing.

The M9 was “the” most reliable gun on the market: able to fire thousands of rounds without malfunction, tough enough to withstand the harshest environmental conditions, and easy to both operate and maintain. Soldiers could rely upon this weapon to get the job done and not be left alone in critical moments.

How many pieces of software (from firmware to operating systems to platforms) are built like a Beretta M9?

Autonomous-driving and liability: a brief taxonomy

Summary: If you really want to regulate the field of autonomous driving, it would be better to establish – at last – the criminal liability of those who produce software, and put an end to those shameful licence clauses stating that software is provided “as is” and is “not suitable for use in critical areas”.

In a discussion with Prof. Alessandro Cortesi on LinkedIn, an interesting debate emerged on the boundaries of legal responsibility for autonomous driving and on the relevance of ethical choices in the instructions given to a vehicle's on-board computer for managing an accident.

Personally, in such a case, I find the use of ethics useless and dangerous.

Ethics is an individual matter which, through the political mediation of representatives of groups that share the same ethics, is translated into legally binding rules. If the State deals directly with ethics, it opens the door to crucifixions, burnings and gas chambers.

On the “decision” in case of an accident: it is not the computer that “decides” but the person who programmed it who (even if only by way of possible malice or conscious negligence) takes responsibility for the degrees of autonomy (not decision) left to the software.

It is a fundamental and indispensable point not to transfer the legal consequences of behaviour from people to things.

Automatic driving cannot be allowed in such a way as to violate by default the laws that regulate driving (conduct which, as it complies with the law, is presumed to be harmless).

The point, if anything, is the management of the extraordinary event (the classic pedestrian who suddenly crosses): in this case – once again – the issue is the malfunctioning of the hardware, or the bad conception, programming or interaction of the software, neither more nor less than what would happen in case of the breakage of any other component.

Moreover, once the car is out of control, no computer can oppose the laws of physics.
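To make the point concrete: the “degrees of autonomy” left to the software are nothing more than conditions a human wrote down in advance. The following is a minimal, purely illustrative sketch – the function name, thresholds and actions are all invented for this example and do not reflect any real vehicle's control logic – showing that even the fail-safe default is a choice made by a programmer, for which that programmer (or their employer) answers.

```python
# Hypothetical sketch: the "degrees of autonomy" left to the software
# are explicit choices made by a human designer, not decisions made by
# the machine. All names and thresholds are illustrative assumptions.

def plan_action(obstacle_confidence: float, speed_kmh: float) -> str:
    """Return the action the human designers chose for each condition."""
    # When the sensors report a likely obstacle, the programmer decided
    # the car must brake. The threshold 0.5 is a human choice.
    if obstacle_confidence >= 0.5:
        return "emergency_brake"
    # A weaker signal at speed triggers a milder, pre-programmed response.
    if obstacle_confidence >= 0.2 and speed_kmh > 30:
        return "slow_down"
    # The default itself is a decision: someone chose "continue".
    return "continue"

# Whoever wrote these thresholds, not the computer, answers for them.
print(plan_action(0.9, 50.0))  # emergency_brake
print(plan_action(0.3, 50.0))  # slow_down
print(plan_action(0.1, 50.0))  # continue
```

Nothing in this sketch “decides” anything: every branch is a rule fixed in advance by a person, which is exactly why the legal consequences cannot be transferred from people to things.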

We are all beta-testers… still.

I firmly disagree with David Pogue's Scientific American column dating back to November 2014, where the journalist wrote:

“Part of our disgruntlement at being served flawed software probably stems from our conception of software itself – as something that is, in fact, finishable. Software used to come in boxes, bearing version numbers. We understood each as a milestone – a program frozen in stone.

But nowadays software is a living, constantly evolving entity. Consider phone apps: nobody seems to mind that new versions pour out constantly, sometimes many times a year. Or Web sites: they’re software, too, and they’re perpetually changing.

Maybe that’s why Adobe no longer produces boxed, numbered versions of Photoshop; instead the only way to get Photoshop is to subscribe to its steady evolution all year long.

Maybe it’s time to stop thinking about traditional programs any differently. Maybe we should get rid of frozen, numbered editions, much as Adobe has done.

That wouldn’t eliminate the frustration of bugginess, but at least we would comprehend software’s true nature: a product that is never finished.”

The fact that software is an ever-evolving product (and not – as we in the EU say – a “copyrighted work”) doesn't imply that it is fair to put a piece of crap on the market, telling people that “we'll clean the toilet with the next version”. Because in the meantime, the stink… stinks.