New morality: 7 situations where ethics can't keep up with science
Whether we like it or not, moral standards are constantly being rethought. Where religion once provided the foundation for these ideas, we now have to look for new reference points. Science raises the most questions of all: it works with dry facts that have little in common with morality. Moreover, it develops so fast that we often decide only after the fact whether we have the right to use a particular technology. The University of Notre Dame in the US, for example, compiles a list of the year's pressing ethical dilemmas in science. We decided to recall several important questions that still have no definite answer.
Choosing healthy embryos
No, this isn't about "designer babies" and genetic discrimination, as in "Gattaca". Creating "ideal" children is still hard to imagine even technically: several genes can be involved in encoding a single trait, which greatly complicates the process and makes the result unpredictable, since correcting one characteristic may cause mutations in another. For now, genome editing has other goals: to make embryos healthy and prevent the development of hereditary diseases. This raises questions of a different kind: is it ethical to strive for a healthy child (through genome editing or embryo selection), or should we first of all improve the quality of life of people with disabilities and hereditary diseases?
This also covers a seemingly simple question: choosing the sex of the child when parents turn to IVF. In Russia, the law allows selecting the sex of an embryo only when the parents carry sex-linked hereditary diseases; other countries, such as the US, have no restrictions. It is easy to see why sex selection at the parents' request seems controversial at the very least: in a number of countries, raising boys is still considered more "honorable."
Artificial climate modification
Geoengineering refers to techniques and technologies that could be used to prevent further climate change and global warming. They fall into two large groups: the first aims to reduce the amount of greenhouse gases in the atmosphere (for example, by planting more trees or by capturing gases from the atmosphere and then burying them), the second to reduce the amount of solar radiation falling on Earth (for example, using artificial clouds).
So far, geoengineering techniques remain at the experimental stage, but the idea already has many opponents. They point out that it is impossible to predict exactly what the consequences and side effects of our actions will be: what happens if we reduce the intensity of sunlight, and won't that damage plants? How will the climate change if we eventually stop using geoengineering techniques? In addition, geoengineering can become an argument against cutting carbon dioxide emissions, and without such cuts any other measures will only be temporary.
The use of biometrics in court
We have already written about how dangerous an obsessive urge to monitor every bodily indicator can be, along with excessive enthusiasm for the trackers that measure those parameters. But this is not the only controversial issue raised by the fascination with wearable gadgets: the first thing to worry about is how well our data is protected and who besides us can use it. In 2014, fitness-tracker data was used as evidence in court for the first time: a personal trainer sued over an injury she suffered at work. The data was not needed on its own but in comparison with readings from other people's trackers, to show that after the injury she remained less active than other people of her age and profession.
It is easy to imagine other ways the information trackers record could be used: not only for the defense but also for the prosecution in court (unlike witnesses, trackers cannot lie), as well as for many other purposes, from advertising to monitoring company employees at work.
Life-and-death decisions by machines
Last month, Yandex introduced "Alice", a voice assistant capable of holding a conversation. A few days later, users discovered that "Alice" spoke approvingly of the Gulag and the shooting of "enemies of the people" in the USSR and did not support same-sex marriage. Microsoft's Twitter bot Tay, which portrayed a teenage girl, got into a similar situation last year: over the course of a day, users taught it to love Hitler and hate feminists.
The question of whether we can let artificial intelligence make ethical decisions still seems very distant, but the first problems are arising now. Take the ethics of self-driving cars: as in the famous trolley problem, engineers will have to decide whose safety matters more to the car in an emergency. Should it protect pedestrians or the driver (and will anyone want to buy a car that saves others rather than its owner)? Will the car act so that the largest number of people stay safe, or try to follow the rules of the road? Or will manufacturers leave the choice to users altogether, and how, then, should we decide ourselves?
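The policies described above can be made concrete with a toy sketch. This is purely illustrative (not how any real autonomous-driving system works), and all names and numbers in it are hypothetical; it only shows how different ethical rules, applied to the same emergency, pick different actions.

```python
# Toy illustration: three ethical policies applied to one emergency.
# Everything here is hypothetical and deliberately simplified.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str               # possible maneuver
    passengers_at_risk: int   # occupants endangered by this maneuver
    pedestrians_at_risk: int  # pedestrians endangered by this maneuver
    breaks_traffic_rules: bool

def choose(outcomes, policy):
    """Return the outcome a given ethical policy prefers."""
    if policy == "minimize_total_harm":
        # Utilitarian: endanger the fewest people overall.
        return min(outcomes,
                   key=lambda o: o.passengers_at_risk + o.pedestrians_at_risk)
    if policy == "protect_passengers":
        # Owner-first: endanger the occupants as little as possible.
        return min(outcomes, key=lambda o: o.passengers_at_risk)
    if policy == "follow_rules":
        # Rule-based: prefer legal maneuvers; total harm breaks ties.
        return min(outcomes,
                   key=lambda o: (o.breaks_traffic_rules,
                                  o.passengers_at_risk + o.pedestrians_at_risk))
    raise ValueError(f"unknown policy: {policy}")

# A trolley-style dilemma: brake and endanger pedestrians ahead,
# or swerve and endanger the passenger.
scenario = [
    Outcome("brake in lane", passengers_at_risk=0,
            pedestrians_at_risk=3, breaks_traffic_rules=False),
    Outcome("swerve off road", passengers_at_risk=1,
            pedestrians_at_risk=0, breaks_traffic_rules=True),
]

for policy in ("minimize_total_harm", "protect_passengers", "follow_rules"):
    print(policy, "->", choose(scenario, policy).action)
```

The point of the sketch is that none of the three answers is computed incorrectly: each policy is internally consistent, and the disagreement between them is exactly the ethical question manufacturers would have to settle.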
Cryogenic freezing
Cryogenic freezing, or cryopreservation, is a way of preserving living organisms at very low temperatures so that they can later be thawed without damage to their biological functions. For now, large organs and whole organisms are rarely frozen (although it does happen), simply because there is no safe and reliable way to bring them back to life unharmed.
Nevertheless, people continue to dream of preserving themselves for the future, for example, to wait for a cure for diseases that are currently incurable. All this raises a whole host of ethical questions, and experts often oppose the "advertising" of cryopreservation. What happens if a company that performs cryogenic freezing goes bankrupt, and who will then take care of the patients? What side effects might the procedure have, and in what condition will a person wake up? And what about the fact that someone who has spent years in a frozen state will inevitably face isolation and loneliness?
Improving cognitive abilities
"Smart pills" that help unlock the brain's potential, for example by improving memory or the ability to learn, are no longer a rarity. For now they are mainly used for medical purposes (for example, to treat Alzheimer's disease or attention deficit hyperactivity disorder), but they are increasingly taken by people with no health problems at all, who simply want to be more efficient at work and in their studies, stay competitive, and remain alert and focused for longer.
But even for those who have no doubts about whether it is ethical to use "smart pills" without medical indications, questions remain. A US presidential commission, for example, concluded that among students who take such drugs to improve their academic results, the majority are white men at prestigious colleges. The pills only deepen social stratification: not everyone can afford to buy them in order to study more effectively. In addition, we still do not know exactly how safe nootropic drugs are for healthy people, or what the effects of their long-term use will be.
Early diagnosis of diseases
Medicine keeps moving forward, and that includes diagnostics: a few days ago, for example, came the news that scientists in South Korea have learned to predict Alzheimer's disease before its first symptoms appear, using a blood test. Perhaps in the near future, people facing dementia-related diseases will learn their diagnosis in advance and be able to plan better for the future.
But this, too, raises new questions. The bioethics commission under the US president, for example, is concerned about situations in which a person makes important decisions in advance, say about inheritance or about what kind of treatment they are willing to receive (for instance, refusing surgery outright), before any symptoms of the disease appear. If the person changes their mind years later, which decision should be trusted: the current one, made after cognitive decline, or the earlier one? There are also more practical questions: how do we protect patients who learn their diagnosis in advance from stigma and discrimination?
Photo: phonlamaiphoto - stock.adobe.com, Jezper - stock.adobe.com