I have a dog who is as thick as two short planks, but she certainly has curiosity and from that one deduces she has self-awareness too.
Thanks for raising the point about self-awareness in animals. It was a point I was tempted to dive into in a previous comment, but when we try to cover too much in a single comment, we lose our audience. (I'll probably lose a lot of readers here, since my comment was so lengthy I had to break it into two posts!)
I have an interest in animal behavior, and I've been fascinated by all the studies over the past few decades about whether or not various animals exhibit self-aware behavior. I think the answer is pretty clear: To a limited degree, some of them do. Perhaps not dogs or cats, but smarter animals such as primates do, and some animal behaviorists argue that some birds do too, as well as some other animals such as dolphins.
So I'm convinced that self-awareness and consciousness are not binary, either/or propositions. They are a matter of degree. Therefore, the question of machine intelligence, or machine consciousness, doesn't particularly bother me. Roboticists are trying very hard to get robots to mimic human behavior and thinking, so I don't doubt that they will eventually achieve some degree of mimicry of human thought, just as they have already gotten some robots to mimic a limited range of human behavior. Again, it's not going to be an all-or-nothing thing. They will gradually get machines to exhibit some behaviors (but not others) which we associate with consciousness, and beyond that point roboticists will continue to gradually improve the ability of machines to mimic human thought and human consciousness. And someday, although probably not until several decades from now, someone will ask a highly advanced machine, "Are you actually thinking like people do, or are you just programmed to imitate human thought?" And the machine will be able to truthfully respond, "A difference which makes no difference is no difference." That's the ultimate answer, the final answer, to the Turing Test!
A third book, which is even older, is Immanuel Kant's "Critique of Pure Reason", although I warn you it is a very hard read! He makes the case, however, that first, what we can think about is governed by what we can sense (we were unable to think about bacteria before the microscope was developed), but more importantly, that our way of thinking is also governed by what we can sense. In other words, the underlying reality of the universe - the 'Ding an sich' (the thing-in-itself) - may be bizarrely different from anything we can conceive of. Concepts such as time and causality may turn out to be not real at all, but mere artefacts of our imagination. Reason itself, he argues, may be an illusion, which means intelligence may well be an equally false notion.
That's a rather outdated way of looking at things. I'm far more interested in recent studies (or philosophical reasoning; in this subject, there's no clear line between the two) which suggest that what we think of as consciousness is a five-or-more-sensory "movie" -- let's call it a gestalt -- which our brains build up of the outside world, but one which ignores much or most of the sensory input, concentrating on just a small fraction of it which is mostly familiar. This process of filtering out all but the most relevant sensory input allows the brain to build up a gestalt which is mostly based on the individual's previous experiences. Thus the well-known phenomenon that people often see what they expect to see, so sometimes we don't notice what is really there.
This means we as individuals are not actually interacting with the world as an objective reality; rather, we're interacting with this gestalt, which has some relation or correspondence to reality, but only to a certain degree. Obviously, since we can function in a reality which we don't perceive directly, that degree of correlation to reality must be reasonably strong, at least in respects related to our survival, our ability to feed ourselves, and our ability to reproduce and so continue the existence of our species. But our brains do reject an awful lot of what our senses bombard them with, to avoid sensory overload. The gestalt we see is a simplification of the real world. (It may be that some people who suffer from autism have brains which are not so adept at this sort of simplification; brains which don't reject the majority of sensory input in favor of the small amount which seems most familiar. It may be that some autistic people suffer from constant sensory overload. But it would be a mistake to suggest that everyone who suffers from autism has the same underlying cause.)
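If you want a cartoon version of that filtering idea, here is a toy sketch in Python. It is a deliberately silly caricature, not a claim about real neuroscience; every number and name in it is my own made-up assumption. It just keeps the small fraction of incoming "sensory" samples that best match prior expectation and builds the "gestalt" from that subset.

```python
# Toy caricature of the filtering idea: keep only the fraction of incoming
# "sensory" samples closest to what prior experience predicts, then build
# the working picture ("gestalt") from that small subset. Purely illustrative.
import random

random.seed(0)
prior_expectation = 0.5                           # what experience predicts
sensory_input = [random.random() for _ in range(1000)]

# keep only the ~10% of samples that best match the expectation
kept = sorted(sensory_input, key=lambda s: abs(s - prior_expectation))[:100]
gestalt = sum(kept) / len(kept)

print(f"kept {len(kept)} of {len(sensory_input)} samples; gestalt = {gestalt:.2f}")
```

The point of the toy is only this: the resulting "gestalt" reflects the expectation at least as much as it reflects the raw input.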
However, designing and/or programming a machine to perceive the world using the same gestalt we do... well, I suspect that like many or most engineering goals, that will be an ideal to work toward, rather than something which can be achieved with 100% satisfaction.
Coming back to self-driving cars, there are a number of unsolved ethical problems in the design of these things - even if they work. Whose safety do you prioritise in an emergency, for instance? The driver's or the pedestrians'? Should you prioritise the young over the old, or simply minimise deaths? Are three likely serious injuries worse than one almost certain death? There are no clear answers to these questions for human drivers, but with people we can leave them unanswered. When you have to build these decisions into software, however, you need clear answers.
As a computer programmer, I regard discussions of this topic as about as useful as arguing over how many angels can dance on the head of a pin. They have no relevance to any practical matter, such as programming self-driving software.
The goals of a team designing software for a self-driving car will be, in order:
1. Use the car's sensors to build up, in the car's computer software, a real-time virtual 3-D "picture" of the environment, which the car can use for such things as detecting obstacles (both stationary and moving) and detecting where the traffic lanes of the road are. (This is known as SLAM, for Simultaneous Localization And Mapping: a process whereby a robot or a device creates a map of its surroundings and orients itself within that map in real time. A toy sketch of such a map appears after this list.)
2. Using the SLAM, to avoid collisions with anything larger than (let's say) a small adult cat. (We could of course choose to limit this to larger objects, but ideally the car should not ignore anything large enough to be a fairly small human baby. Running over squirrels may be a small tragedy, but we have to draw the line somewhere.)
3. To obey the rules of the road, both written and "unwritten". (Minimizing abrupt actions which would be unexpected by other drivers will help prevent accidents.)
4. Within reason, to maximize the safety of the journey. (People will of course argue over what is "within reason". They're already complaining about self-driving cars stopping for taco trucks!)
5. Within the constraints of #3, to get the vehicle (along with its passengers and cargo) safely to its destination in the shortest amount of time possible. (Speed demons won't like self-driving cars. That's a good thing; we need to get rid of drivers who are speed demons.)
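To make #1 a bit more concrete, here is the toy sketch I promised: a tiny occupancy-grid map of the kind a SLAM layer might maintain, in Python. This is purely illustrative, not any vendor's actual stack; the class name, the cell size, the log-odds increments, and the sensor inputs are all assumptions of mine, not a real API.

```python
# Toy sketch of the map a SLAM layer might maintain: a 2-D occupancy grid
# centered on the car, updated from hypothetical range-sensor returns.
# All names and numbers are illustrative assumptions.
import numpy as np

CELL_SIZE = 0.25          # metres per grid cell (assumed resolution)
GRID_DIM = 200            # 200 x 200 cells ~= a 50 m x 50 m window

class OccupancyGrid:
    def __init__(self):
        # log-odds of occupancy; 0.0 means "unknown"
        self.logodds = np.zeros((GRID_DIM, GRID_DIM))

    def update(self, hits, misses):
        """hits/misses are (x, y) points in metres, with the car at the centre."""
        for points, delta in ((hits, +0.9), (misses, -0.4)):
            for x, y in points:
                i = int(x / CELL_SIZE) + GRID_DIM // 2
                j = int(y / CELL_SIZE) + GRID_DIM // 2
                if 0 <= i < GRID_DIM and 0 <= j < GRID_DIM:
                    self.logodds[i, j] += delta

    def occupied(self, threshold=0.5):
        """Cells the planner should treat as obstacles."""
        return np.argwhere(self.logodds > threshold)

# Example: one lidar return 10 m ahead of the car, free space on the way to it.
grid = OccupancyGrid()
grid.update(hits=[(10.0, 0.0)], misses=[(x, 0.0) for x in range(1, 10)])
print(len(grid.occupied()), "occupied cell(s)")
```

Everything downstream - lane keeping, obstacle avoidance, route following - consumes a map like this; nowhere in it is there any notion of "child" or "school bus", just occupied and free space.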
All those armchair-philosopher questions, such as asking whether the self-driving car should prioritize the safety of a bus full of children over the lives of the people in the car itself, are utterly irrelevant to the software engineer or computer programmer. Again, we're not dealing with a self-aware machine capable of recognizing what a child (or even a school bus) is; we're dealing with a machine programmed to do the things on that short list above, as reliably as possible. The self-driving car, even a very sophisticated one capable of driving with a 10x or 20x lower accident rate than the average human being, cannot recognize what a human being is, nor reliably distinguish one from non-human objects! And it certainly won't be able to distinguish between a school bus full of kids and one that is empty (except for the bus driver).
Oh sure, the team could design self-driving software which would recognize the shape of a typical adult pedestrian walking normally. But what about an amputee on crutches? What about a child on a tricycle? What about a person in a costume which changes the outline of his or her body? And would a car be able to recognize that a mannequin isn't a real live person? Of course not!
I submit there is no point in trying to prioritize the safety of a general class of "human beings", since self-driving cars won't be able to reliably recognize that class of objects. And anyway, what difference does it really make? Whether it's a deer or a child or some idiot concentrating on his cellphone rather than looking where he's walking, any large object (living or otherwise) which moves out into lanes of traffic is something the car should be programmed to avoid colliding with. It would be ridiculous for the programming team to, for example, program the car to avoid running over a man (or a child chasing a ball) but ignore colliding with a deer. Collision with a deer (or any other large, and quite possibly massive, obstacle) in the road could result in severe damage to the vehicle and severe injury to its passengers.
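To illustrate that last point: the avoidance logic doesn't need a concept of "human being" at all, just size and predicted position. Here is a toy sketch of what I mean, in Python; the data class, the thresholds, and all the numbers are made-up assumptions for illustration, not anything from a real vehicle.

```python
# Toy illustration of "don't classify, just avoid": brake for any tracked
# object above a size threshold whose predicted path crosses our lane,
# whether it's a deer, a pedestrian, or a mannequin. Numbers are assumptions.
from dataclasses import dataclass

SIZE_THRESHOLD_M = 0.25    # roughly "small adult cat" and larger
LANE_HALF_WIDTH_M = 1.8
HORIZON_S = 3.0            # how far ahead we project object motion

@dataclass
class TrackedObject:
    lateral_m: float        # offset from lane centre, metres
    lateral_vel_mps: float  # lateral velocity toward/away from the lane
    longitudinal_m: float   # distance ahead of the car
    size_m: float           # largest measured extent

def must_brake(obj: TrackedObject) -> bool:
    """Brake for anything big enough that is, or soon will be, in our lane."""
    if obj.size_m < SIZE_THRESHOLD_M:
        return False                      # ignore squirrel-sized objects
    future_lateral = obj.lateral_m + obj.lateral_vel_mps * HORIZON_S
    in_lane_now = abs(obj.lateral_m) < LANE_HALF_WIDTH_M
    in_lane_soon = abs(future_lateral) < LANE_HALF_WIDTH_M
    return (in_lane_now or in_lane_soon) and obj.longitudinal_m < 40.0

# A deer stepping toward the lane and a child chasing a ball look identical here.
print(must_brake(TrackedObject(lateral_m=3.0, lateral_vel_mps=-1.5,
                               longitudinal_m=25.0, size_m=1.2)))  # True
```

Notice that nothing in that decision depends on knowing what the object is; that's exactly why the trolley-problem framing never shows up in the code.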