Musk Right on Robots, Right side Welfare Petrol Advocates Wrong!

  • Thread starter: 101101
  • Replies: 22
  • Views: 4K
It is worth noting that passenger aircraft generally get round this by using several computers, built to different designs with code produced using different high-level languages, written by separate teams kept isolated from each other. Some even insist on the team members having been educated in different universities in an attempt to eliminate common errors. A 'majority vote', in effect, is taken from them all to control safety critical systems. This is horrendously expensive, of course, and will not be done for cars.
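
For illustration, here is a minimal sketch of that 2-of-3 majority-vote idea in Python. The three channel functions are hypothetical stand-ins for the independently developed implementations (which in the real aircraft case would be separate teams, languages and hardware):

```python
# Minimal sketch of 2-of-3 majority voting across redundant channels.
# channel_a/b/c are hypothetical stand-ins for independently built
# implementations of the same specification.
from collections import Counter

def channel_a(sensor_input):
    return sensor_input * 2              # implementation 1

def channel_b(sensor_input):
    return sensor_input + sensor_input   # implementation 2, same spec

def channel_c(sensor_input):
    return 2 * sensor_input              # implementation 3, same spec

def voted_output(sensor_input):
    """Return the value a majority of channels agree on, else fail safe."""
    results = [channel_a(sensor_input),
               channel_b(sensor_input),
               channel_c(sensor_input)]
    value, count = Counter(results).most_common(1)[0]
    if count >= 2:                       # 2-of-3 majority
        return value
    raise RuntimeError("No majority -- fail safe / hand off to backup")

print(voted_output(21))  # -> 42
```

The point of building the channels independently is that a single design error is unlikely to appear in two of the three, so the vote masks it.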

I know that on the Space Shuttle, they did use five different computers which "voted" on results, to assure accuracy (and eliminate the problem of glitches due to cosmic ray events and other radiation), but my understanding is that those were five identical computers running identical software.

I'd be interested to see if you can come up with an authoritative citation for your assertion above. I will be very surprised if you can; it reads very much like B.S.

There are certain military and aerospace applications that demand software developed with zero tolerance for bugs, and that discipline produces very reliable software. But of course, as you say, it's still not possible to get 100% reliability in a program a million lines long. It is possible to get close.

Not that this will stop auto makers from making, or people from buying and using, self-driving cars. As I already said:

“The thing to keep in mind is that self-driving cars don’t have to be perfect to change the world. They just have to be better than human beings.” -- Deepak Ahuja, CFO of Tesla Inc.
But, Martin, you have shown yourself to have a highly developed ability to utterly ignore any facts or logic inconvenient to your arguments. As Ronald Reagan said: "Well, there you go again!"
 
A 'majority vote', in effect, is taken from them all to control safety critical systems. This is horrendously expensive, of course, and will not be done for cars.

Much more will be spent on self-driving because of urban sprawl (even if telecommuting helps too), road damage, accidental deaths, people with disabilities, enabling the aged, and the public's desire to reduce insurance costs (gambling with socialized excesses); what Lilium is proposing will just cube this. It's too much of an environmental and economic surplus to pass up. I buy Tony Seba's analysis that it will cut what the average family spends on transport to a tenth while increasing the quality and convenience of the experience: no more traffic, no more hunting for parking spaces, no real need to even buy a car. Plus it's possible to put in a kill switch or a manual override that doesn't require one to be James Bond to operate.

My personal preference would be to treat such systems (which are likely to contain unanalysable features such as neural networks and genetically derived algorithms) as we do people, and, after they have been designed, to put them through a 'driving test'. This can be done quite safely in simulation, and at high speed, so many thousands of hours of unexpected situations can be thrown at the system in a relatively short time and its performance examined and assessed by human examiners. This at least avoids reliance on a distant code-writer: his work will be subjected to independent critical examination.
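
As a rough sketch of what such a simulated driving test could look like (the scenario fields and the toy policy here are hypothetical placeholders, not any real simulator's API):

```python
# Sketch of a simulated 'driving test': throw many randomized scenarios
# at an opaque driving policy and tally its failure rate.
import random

def make_scenario(rng):
    return {
        "obstacle_distance_m": rng.uniform(1.0, 100.0),
        "speed_mps": rng.uniform(0.0, 30.0),
        "surface_friction": rng.uniform(0.2, 1.0),
    }

def policy(scenario):
    # Stand-in for the unanalysable system under test (e.g. a neural net).
    # This toy policy wrongly assumes good road friction, so it should
    # fail on some low-friction (icy) scenarios.
    stopping_guess = scenario["speed_mps"] ** 2 / (2 * 9.81 * 0.8)
    return "brake" if scenario["obstacle_distance_m"] < stopping_guess * 1.2 else "cruise"

def driving_test(n_runs=10_000, seed=0):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_runs):
        s = make_scenario(rng)
        # Ground truth uses the actual friction in the scenario.
        stopping = s["speed_mps"] ** 2 / (2 * 9.81 * s["surface_friction"])
        must_brake = s["obstacle_distance_m"] < stopping
        if must_brake and policy(s) != "brake":
            failures += 1
    return failures / n_runs

print(f"failure rate: {driving_test():.4%}")
```

The examiner never needs to read the policy's internals; the verdict comes entirely from observed behaviour across the scenario distribution.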

That is what Waymo is doing, and in a way what Tesla is doing too. The system will just keep getting better for a long time.
 
The point I am making is that you can displace men with machines up to a point. You can do this easily if the task is simple, and better still repetitive. Much of this has already been done without the need for much, if any, 'AI'. You design the product and the factory so that production consists of easily mechanised repetitive tasks.

The point where it becomes much more difficult is where independent volition is needed. You need a robot which will recognise unexpected problems and act appropriately, and I believe that requires a degree of consciousness. You cannot really expect the designer of a robot to anticipate every problem which might occur. Expected problems are easy to handle, and usually don't require AI to solve: you can let a really dumb robot that just carries heavy loads around get through doors by removing the latch and designing the door so it can be barged open, for instance.

Machine consciousness is something that we are a million miles away from. We have yet to define in precise, unambiguous form what 'consciousness' is, let alone how to achieve it in a machine. For this reason, I suspect 'driverless' cars will remain as they are now and will always require a human on standby for situations they cannot handle. Unfortunately, in this world **** happens, and machines are hopeless at handling it. People ARE quite good at handling it.

I don't take 'wireless power' very seriously either, except for very tiny amounts of power. In the end, for any radiated power you are up against the inverse square law: the best that can be done is many orders of magnitude less than is needed to power a robot. Swapping batteries is better, but I can easily envisage a number of events that would strand the robot away from its stash of charged batteries. So, I am sure, could you.
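
A quick back-of-envelope check of that inverse-square point, with assumed numbers (a 1 kW isotropic transmitter and a 100 cm² receiver at 10 m):

```python
# Back-of-envelope inverse-square calculation; all figures are assumed
# for illustration, not measurements.
import math

P_tx = 1000.0   # transmitted power, W (assumed)
r = 10.0        # distance to receiver, m (assumed)
A_rx = 0.01     # receiver aperture, m^2 (assumed)

power_density = P_tx / (4 * math.pi * r**2)  # W/m^2 over a sphere of radius r
P_rx = power_density * A_rx

print(f"{P_rx * 1000:.1f} mW received")  # ~8 mW, versus the hundreds of
                                         # watts a mobile robot would need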

Clearly you don't know what AI is. Artificial intelligence is a computerized self-learning system. It excels at dealing with the unknown by trial and error: it may not make the best response to something new the first time, but when the desired result is not obtained it will try again differently until it gets it right. I don't believe current systems are conscious in the sense of self-aware, but that doesn't mean they can't solve problems.
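
For what it's worth, here is a toy sketch of that trial-and-error loop: an epsilon-greedy learner on a made-up four-action problem, with all numbers assumed for illustration.

```python
# Toy trial-and-error learner (epsilon-greedy): keeps trying actions,
# updates its estimates from the results, and converges on what works.
import random

def reward(action):
    # Hidden environment: only action 2 reliably "gets it right".
    return 1.0 if action == 2 else (0.2 if random.random() < 0.1 else 0.0)

random.seed(0)
estimates = [0.0] * 4   # running value estimate per action
counts = [0] * 4

for step in range(2000):
    if random.random() < 0.1:            # explore: try something different
        a = random.randrange(4)
    else:                                # exploit: best guess so far
        a = max(range(4), key=lambda i: estimates[i])
    r = reward(a)
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]  # incremental average

print("learned best action:", max(range(4), key=lambda i: estimates[i]))
```

Nothing here is self-aware; the loop simply retries differently when the result is poor, which is the behaviour described above.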
 