Summary: If we really want to regulate the field of autonomous driving, it would be better to establish – at last – the criminal liability of those who produce software, and to put an end to those shameful licence clauses stating that the "software is provided as is and is not suitable for use in critical applications".
In a discussion with Prof. Alessandro Cortesi on LinkedIn, an interesting debate emerged about the boundaries of legal responsibility for autonomous driving and about the relevance of ethical choices in the instructions given to a vehicle's on-board computer for managing an accident.
Personally, in such a case, I find invoking ethics both useless and dangerous.
Ethics is an individual matter which, through the political mediation of representatives of groups that share a common ethics, is translated into legally binding rules. If the State takes ethics directly into its own hands, it opens the door to crucifixions, burnings at the stake and gas chambers.
As for the "decision" in case of an accident: it is not the computer that "decides" but the person who programmed it who (if only by way of possible intent or conscious negligence) takes responsibility for the degrees of autonomy (not decision) left to the software.
This is a fundamental and indispensable point: the legal consequences of behaviour must not be transferred from people to things.
Autonomous driving cannot be allowed to violate by default the laws that regulate driving (conduct that complies with the law is presumed to be harmless).
The point, if anything, is the management of the extraordinary event (the classic pedestrian who suddenly crosses the road): in this case – once again – the issue is a malfunction of the hardware or the flawed design, programming or interaction of the software, no more and no less than what would happen if any other component broke.
Moreover, once the vehicle goes out of control, no computer can override the laws of physics.