Possibility of orthogonal view for a creature?

Just pure curiosity: does there exist a creature with an orthogonal (parallel-projection) view of the world instead of a perspective one? What would be an optical explanation for its possibility or impossibility?

Many creatures have eyes on the sides of their heads and cannot point both eyes at the same target simultaneously; whales, for example. Chameleons are well known for having independently moving eyes (though they can align them for binocular vision).

Since all the significant processing of vision occurs in the brain's occipital lobes, there is no problem with the optics.

The answer depends on the distance between the eyes and the width of the object being viewed.

The view of a single object comes close to an orthogonal view when the width of the object equals the distance between your eyes.

Hence a creature with a variable distance between its eyes, and a large maximum distance, has the best chance of viewing objects orthogonally. A snail comes to mind when I try to think of an animal with variable eye separation, but its maximum separation is not that large.

When an object is narrower than the distance between the eyes, you see it in an "anti-panoramic" view: you see both the left and the right side of the object, which is impossible in both an orthogonal view and a panoramic view.

Speaking of the whole 180-degree-wide visible scene (wider for some animals), the answer is no. You cannot view the whole scene without perspective unless you have a large number of "eyes" (sensors) spread over a large plane. No such animal exists, but you can artificially construct such a "perceptor plane" from a large number of cameras.

When looking at objects whose width equals the distance between the eyes, if the brain could direct the eyes so that their viewing axes were parallel, and could integrate the two pictures, it would produce a perfectly orthogonal view. This is not what our brain normally does, though. But even with non-parallel viewing axes, what we eventually see when the brain integrates the two pictures is not far from an orthogonal projection.

A lobe is a part of the brain that is dedicated to a certain function. A common mutation of the brain is the Multi-lobed creature, with duplicate and/or new brain lobes. For articles on particular lobes, see the Brain category.

In the Creatures series of games, a neuron is a place where you can store a number value. Most neurons lose the saved value over time, but some do this faster than others. More information about simulated neurons in general is available in Wikipedia's Artificial Neuron article.

Neurons can work together in "neural networks", or neural nets.


CCSS: Literacy in Science: 8

TEKS: 6.12A, 7.14A, 8.2E, B.2H, B.6A, B.6H

Can cutting-edge science uncover the true identity of a mysterious beast?

AS YOU READ, THINK ABOUT the type of evidence scientists could collect to help verify the existence of the Loch Ness Monster.

A few years ago, an American tourist peered out over a murky lake in Scotland and spotted something unusual in the water. He later described it as large, dark, and about the length of a school bus. But before he could snap a photo, the odd object disappeared beneath the surface.

The lake the man was visiting was Loch Ness, one of the largest in the United Kingdom. And he wasn’t the first person to spy something mysterious there. For centuries, people have reported seeing strange moving shapes in the lake’s cloudy waters. Many believe they’ve glimpsed an elusive creature known as the Loch Ness Monster, or Nessie for short.

There are plenty of theories about Nessie. Some people believe it’s a plesiosaur, an extinct prehistoric marine reptile that had a long neck. Others speculate that it’s an enormous fish. Or it could simply be a log. Although thousands of people have claimed to see Nessie, no one has been able to prove for certain that a large beast really lurks in the lake.

If there were any scientific evidence to support Nessie’s existence, Neil Gemmell thought he might know how to find it. Gemmell is a biologist at the University of Otago in New Zealand. In 2018, he rounded up an international team of scientists to travel to Scotland. “We set out to answer a simple question: What living things are in Loch Ness?” says Gemmell. “To answer the question, we planned to use a brand-new technology.”

Debate and controversy

In the Gorman article, the author states, “The tools needed to modify organisms are already widely dispersed in industry and beyond.” That could mean the development of an animal like a jabberjay might not be so far-fetched. In fact, a growing “do-it-yourself” biology movement raises concerns about how easy it might be for people outside research laboratories to create harmful micro-organisms.[5]

Activity | Controversy

The Zimmer article raises two key questions:

  • Should a research paper that details how to make an airborne version of the H5N1 avian influenza virus be published?
  • Is the D.I.Y. biology movement a helpful or hurtful movement in science?

We will be using the avian flu example to explore these questions. You will be creating presentations for the National Institutes of Health’s National Scientific Advisory Board for Biosecurity regarding D.I.Y. biology and current research into the avian flu virus.

We will be jigsawing you into different groups in order to represent all the sides in this debate. You will be assigned to one of the following groups:

D.I.Y. Biology Experts
Scientists who support publication
Researchers opposed to research & publication
National Scientific Advisory Board for Biosecurity
Once you have been assigned to one of these groups, click on your group to visit its discussion forum. Here you will find out more about your assignment expectations.

Once our debate is complete, click here to find out if the findings were actually published or not.

Finding the secret to survival

Just how bdelloid rotifers are able to survive such extreme conditions is still a mystery, Mr Malavin says.

"There are different mechanisms, but we still don't understand the whole orchestration."

Tardigrades in space

A bunch of microscopic, virtually indestructible creatures known as tardigrades (or water bears, depending on who you ask) crash landed on the Moon. What are the chances of their survival?

Other animals, such as Antarctic fish, have glycoproteins that produce an antifreeze effect.

These proteins lower the freezing point of the fish's body fluids and allow ice crystals to be filtered through the spleen, said Kerrie Swadling, a marine ecologist at the University of Tasmania's Institute for Marine and Antarctic Studies.

"To date, there has been no antifreeze identified in bdelloid rotifers," said Professor Swadling, who was not involved in the study.

"The jury seems to be out with respect to the mechanism behind their ability to survive freezing."

While more research is needed to figure out how bdelloid rotifers can recover from being frozen in time, looking at their DNA could be a good starting point, said Professor Cavicchioli, who was not involved in the study.

The Brain ‘Rotates’ Memories to Save Them From New Sensations

Research in mice shows that neural representations of sensory information get rotated 90 degrees to transform them into memories. In this orthogonal arrangement, the memories and sensations do not interfere with one another.

Samuel Velasco/Quanta Magazine

Jordana Cepelewicz

During every waking moment, we humans and other animals have to balance on the edge of our awareness of past and present. We must absorb new sensory information about the world around us while holding on to short-term memories of earlier observations or events. Our ability to make sense of our surroundings, to learn, to act and to think all depend on constant, nimble interactions between perception and memory.

But to accomplish this, the brain has to keep the two distinct; otherwise, incoming data streams could interfere with representations of previous stimuli and cause us to overwrite or misinterpret important contextual information. Compounding that challenge, a body of research hints that the brain does not neatly partition short-term memory function exclusively into higher cognitive areas like the prefrontal cortex. Instead, the sensory regions and other lower cortical centers that detect and represent experiences may also encode and store memories of them. And yet those memories can’t be allowed to intrude on our perception of the present, or to be randomly rewritten by new experiences.

A paper published recently in Nature Neuroscience may finally explain how the brain’s protective buffer works. A pair of researchers showed that, to represent current and past stimuli simultaneously without mutual interference, the brain essentially “rotates” sensory information to encode it as a memory. The two orthogonal representations can then draw from overlapping neural activity without intruding on each other. The details of this mechanism may help to resolve several long-standing debates about memory processing.

To figure out how the brain prevents new information and short-term memories from blurring together, Timothy Buschman, a neuroscientist at Princeton University, and Alexandra Libby, a graduate student in his lab, decided to focus on auditory perception in mice. They had the animals passively listen to sequences of four chords over and over again, in what Buschman dubbed “the worst concert ever.”

These sequences allowed the mice to establish associations between certain chords, so that when they heard one initial chord versus another, they could predict what sounds would follow. Meanwhile, the researchers trained machine learning classifiers to analyze the neural activity recorded from the rodents’ auditory cortex during these listening sessions, to determine how the neurons collectively represented each stimulus in the sequence.

Buschman and Libby watched how those patterns changed as the mice built up their associations. They found that over time, the neural representations of associated chords began to resemble each other. But they also observed that new, unexpected sensory inputs, such as unfamiliar sequences of chords, could interfere with a mouse’s representations of what it was hearing — in effect, by overwriting its representation of previous inputs. The neurons retroactively changed their encoding of a past stimulus to match what the animal associated with the later stimulus — even if that was wrong.

The researchers wanted to determine how the brain must be correcting for this retroactive interference to preserve accurate memories. So they trained another classifier to identify and differentiate neural patterns that represented memories of the chords in the sequences — the way the neurons were firing, for instance, when an unexpected chord evoked a comparison to a more familiar sequence. The classifier did find intact patterns of activity from memories of the actual chords that had been heard — rather than the false “corrections” written retroactively to uphold older associations — but those memory encodings looked very different from the sensory representations.

The memory representations were organized in what neuroscientists describe as an “orthogonal” dimension to the sensory representations, all within the same population of neurons. Buschman likened it to running out of room while taking handwritten notes on a piece of paper. When that happens, “you will rotate your piece of paper 90 degrees and start writing in the margins,” he said. “And that’s basically what the brain is doing. It gets that first sensory input, it writes it down on the piece of paper, and then it rotates that piece of paper 90 degrees so that it can write in a new sensory input without interfering or literally overwriting.”

In other words, sensory data was transformed into a memory through a morphing of the neuronal firing patterns. “The information changes because it needs to be protected,” said Anastasia Kiyonaga, a cognitive neuroscientist at the University of California, San Diego who was not involved in the study.

This use of orthogonal coding to separate and protect information in the brain has been seen before. For instance, when monkeys are preparing to move, neural activity in their motor cortex represents the potential movement but does so orthogonally to avoid interfering with signals driving actual commands to the muscles.

Still, it often hasn’t been clear how the neural activity gets transformed in this way. Buschman and Libby wanted to answer that question for what they were observing in the auditory cortex of their mice. “When I first started in the lab, it was hard for me to imagine how something like that could happen with neural firing activity,” Libby said. She wanted to “open the black box of what the neural network is doing to create this orthogonality.”

Experimentally sifting through the possibilities, they ruled out the possibility that different subsets of neurons in the auditory cortex were independently handling the sensory and memory representations. Instead, they showed that the same general population of neurons was involved, and that the activity of the neurons could be divided neatly into two categories. Some were “stable” in their behavior during both the sensory and memory representations, while other “switching” neurons flipped the patterns of their responses for each use.

To the researchers’ surprise, this combination of stable and switching neurons was enough to rotate the sensory information and transform it into memory. “That’s the entire magic,” Buschman said.

In fact, he and Libby used computational modeling approaches to show that this mechanism was the most efficient way to build the orthogonal representations of sensation and memory: It required fewer neurons and less energy than the alternatives.

Buschman and Libby’s findings feed into an emerging trend in neuroscience: that populations of neurons, even in lower sensory regions, are engaged in richer dynamic coding than was previously thought. “These parts of cortex that are lower down in the food chain are also fitted out with really interesting dynamics that maybe we haven’t really appreciated until now,” said Miguel Maravall, a neuroscientist at the University of Sussex who was not involved in the new study.

The work could help reconcile two sides of an ongoing debate about whether short-term memories are maintained through constant, persistent representations or through dynamic neural codes that change over time. Instead of coming down on one side or the other, “our results show that basically they were both right,” Buschman said, with stable neurons achieving the former and switching neurons the latter. The combination of processes is useful because “it actually helps with preventing interference and doing this orthogonal rotation.”

Buschman and Libby’s study might be relevant in contexts beyond sensory representation. They and other researchers hope to look for this mechanism of orthogonal rotation in other processes: in how the brain keeps track of multiple thoughts or goals at once; in how it engages in a task while dealing with distractions; in how it represents internal states; and in how it controls cognition, including attention processes.

“I’m really excited,” Buschman said. Looking at other researchers’ work, “I just remember seeing, there’s a stable neuron, there’s a switching neuron! You see them all over the place now.”

Libby is interested in the implications of their results for artificial intelligence research, particularly in the design of architectures useful for AI networks that have to multitask. “I would want to see if people pre-allocating neurons in their neural networks to have stable and switching properties, instead of just random properties, helped their networks in some way,” she said.

All in all, “the consequences of this kind of coding of information are going to be really important and really interesting to figure out,” Maravall said.

I am not sure that orthogonality can serve as a useful or valid metric for general-purpose high-level languages like C#, because it requires a distinction between "operations" and "operands" -- small parts of the language that are not easily distinguishable in a language like C#.

My understanding of orthogonality is based on assembly language, where the orthogonality of the instruction set of a particular CPU or microcontroller indicates whether there are constraints on the operations that CPU or controller can perform depending on the data types. In earlier times this was important, because not every CPU supported operations on fractional numbers, or on numbers of different lengths, etc.

In this respect I would rather check the orthogonality of the Common Intermediate Language, the stack-machine language that is the target of the C# compiler, rather than C# itself.

If you are really interested in the orthogonality of C# (for whatever purpose), I would suggest looking at genetic programming algorithms. You can use them to generate different programs from a given set of keywords (even meaningless ones) and automatically check whether they compile. This would help you see automatically which elements of the language can be combined, and so derive some aspects of your orthogonality metric.

The term "orthogonality" is a layman's term for a precise mathematical notion: the language terms form an initial algebra (look it up in Wikipedia).

It basically means "there is a 1-1 correspondence between syntax and meaning". This means: there is exactly one way to express things, and, if you can put some expression in a particular place, then you can put any other expression there too.

Another way to think about "orthogonal" is that the syntax obeys the substitution principle. For example, if you have a statement with a slot for an expression, then any expression can be put there and the result is still a syntactically valid program. Furthermore, if you replace the expression in that slot with any other expression, the program remains syntactically valid.

I want to emphasise that "meaning" does not imply computational result. Clearly, 1 + 2 and 2 + 1 both equal 3. However the terms are distinct, and imply a different calculation even if it has the same result. The meaning is different, just as two sort algorithms are different.

You may have heard of "abstract syntax tree" (AST). The word "abstract" here means precisely "orthogonal". Technically most AST's are not in fact abstract!

Perhaps you have heard of the "C" programming language? C type notation is not abstract. Consider:

So here is a function declaration returning type int. The type of a pointer to this function is given by:

Note, you cannot write the type of the function! C type notation sucks bigtime! It isn't abstract. It isn't orthogonal. Now, suppose we want to make a function which accepts the above type instead of int:

All ok .. but .. what if we want to return it instead:

Woops! Invalid. Let's add parens:

Woops! That doesn't work either. We have to do this (it's the only way!):

Now it's OK, but having to use a typedef here is bad. C sucks. It isn't abstract. It isn't orthogonal. Here's how you do this in ML, which is:

We condemn C at the syntax level.

Ok, now let's flog C++. We can fix the stupidity above with templates and get an ML-like notation (more or less):

but the actual type system is fundamentally flawed by references: if T is a type, then is T& a type? The answer is waffly: at the syntax level, if you have a type U = T&, then U& is allowed but it just means T&: a reference to a reference is the original reference. This sucks! It breaks the uniqueness requirement semantically. Worse: T& & is not allowed syntactically: this breaks the substitution principle. So C++ references break orthogonality in two different ways, depending on the binding time (parsing or type analysis). If you want to understand how to do this right .. there's no problem with pointers!

Almost no real languages are orthogonal. Even Scheme, which pretends great clarity of expression, isn't. However many good languages can be judged to have a "reasonably close to orthogonal feature basis" and that is a good recommendation for a language, applied both to the syntax and to the underlying semantics.

In Search For Cures, Scientists Create Embryos That Are Both Animal And Human

A handful of scientists around the United States are trying to do something that some people find disturbing: make embryos that are part human, part animal.

The researchers hope these embryos, known as chimeras, could eventually help save the lives of people with a wide range of diseases.

One way would be to use chimera embryos to create better animal models to study how human diseases happen and how they progress.

Perhaps the boldest hope is to create farm animals that have human organs that could be transplanted into terminally ill patients.

But some scientists and bioethicists worry the creation of these interspecies embryos crosses the line. "You're getting into unsettling ground that I think is damaging to our sense of humanity," says Stuart Newman, a professor of cell biology and anatomy at the New York Medical College.

The experiments are so sensitive that the National Institutes of Health has imposed a moratorium on funding them while officials explore the ethical issues they raise.

Nevertheless, a small number of researchers are pursuing the work with alternative funding. They hope the results will persuade the NIH to lift the moratorium.

"We're not trying to make a chimera just because we want to see some kind of monstrous creature," says Pablo Ross, a reproductive biologist at the University of California, Davis. "We're doing this for a biomedical purpose."

The NIH is expected to announce soon how it plans to handle requests for funding.

Recently, Ross agreed to let me visit his lab for an unusual look at his research. During the visit, Ross demonstrated how he is trying to create a pancreas that theoretically could be transplanted into a patient with diabetes.

The first step involves using new gene-editing techniques to remove the gene that pig embryos need to make a pancreas.

Working under an elaborate microscope, Ross makes a small hole in the embryo's outer membrane with a laser. Next, he injects a molecule synthesized in the laboratory to home in on and delete the pancreas gene inside. (In separate experiments, he has done this to sheep embryos, too.)

After the embryos have had their DNA edited this way, Ross creates another hole in the membrane so he can inject human induced pluripotent stem cells, or iPS for short, into the pig embryos.

Like human embryonic stem cells, iPS cells can turn into any kind of cell or tissue in the body. The researchers' hope is that the human stem cells will take advantage of the void in the embryo to start forming a human pancreas.

Because iPS cells can be made from any adult's skin cells, any organs they form would match the patient who needs the transplant, vastly reducing the risk that the body would reject the new organ.

But for the embryo to develop and produce an organ, Ross has to put the chimera embryos into the wombs of adult pigs. That involves a surgical procedure, which is performed in a large operating room across the street from Ross's lab.

Pablo Ross of the University of California, Davis inserts human stem cells into a pig embryo as part of experiments to create chimeric embryos. (Rob Stein/NPR)

The day Ross opened his lab to me, a surgical team was anesthetizing an adult female pig so surgeons could make an incision to get access to its uterus.

Ross then rushed over with a special syringe filled with chimera embryos. He injected 25 embryos into each side of the animal's uterus. The procedure took about an hour. He repeated the process on a second pig.

Every time Ross does this, he then waits a few weeks to allow the embryos to develop to their 28th day — a time when primitive structures such as organs start to form.

Ross then retrieves the chimeric embryos to dissect them so he can see what the human stem cells are doing inside. He examines whether the human stem cells have started to form a pancreas, and whether they have begun making any other types of tissues.

The uncertainty is part of what makes the work so controversial. Ross and other scientists conducting these experiments can't know exactly where the human stem cells will go. Ross hopes they'll only grow a human pancreas. But they could go elsewhere, such as to the brain.

"If you have pigs with partly human brains you would have animals that might actually have consciousness like a human," Newman says. "It might have human-type needs. We don't really know."

That possibility raises new questions about the morality of using the animals for experimentation. Another concern is that the stem cells could form human sperm and human eggs in the chimeras.

"If a male chimeric pig mated with a female chimeric pig, the result could be a human fetus developing in the uterus of that female chimera," Newman says. Another possibility is the animals could give birth to some kind of part-human, part-pig creature.

"One of the concerns that a lot of people have is that there's something sacrosanct about what it means to be human expressed in our DNA," says Jason Robert, a bioethicist at Arizona State University. "And that by inserting that into other animals and giving those other animals potentially some of the capacities of humans that this could be a kind of violation — a kind of, maybe, even a playing God."

Ross defends his work. "I don't consider that we're playing God or even close to that," Ross says. "We're just trying to use the technologies that we have developed to improve people's lives."

Still, Ross acknowledges the concerns. So he's moving very carefully, he says. For example, he's only letting the chimera embryos develop for 28 days. At that point, he removes the embryos and dissects them.

If he discovers the stem cells are going to the wrong places in the embryos, he says he can take steps to stop that from happening. In addition, he'd make sure adult chimeras are never allowed to mate, he says.

"We're very aware and sensitive to the ethical concerns," he says. "One of the reasons we're doing this research the way we're doing it is because we want to provide scientific information to inform those concerns."

Ross is working with Juan Carlos Izpisua Belmonte from the Salk Institute for Biological Studies in La Jolla, Calif., and Hiromitsu Nakauchi at Stanford University. Daniel Garry of the University of Minnesota and colleagues are conducting similar work. The research is funded in part by the Defense Department and the California Institute for Regenerative Medicine (CIRM).

Nanobot micromotors deliver medical payload in living creature for the first time

Researchers working at the University of California, San Diego have claimed a world first in proving that artificial, microscopic machines can travel inside a living creature and deliver their medicinal load without any detrimental effects. Using micro-motor powered nanobots propelled by gas bubbles made from a reaction with the contents of the stomach in which they were deposited, these miniature machines have been successfully deployed in the body of a live mouse.

The tiny robots used in the research were tubular, about 20 micrometers long, 5 micrometers in diameter, and coated in zinc. Once the mouse ingested the tubes and they reached the stomach, the zinc reacted with the hydrochloric acid in the digestive juices to produce bubbles of hydrogen, which then propelled the nanobots along like miniature rockets.

Reaching speeds of up to 60 micrometers per second, the nanobots headed outwards toward the stomach lining where they then embedded themselves, dissolved, and delivered a nanoparticle compound directly into the gut tissue.

According to the researchers, of all the nanobots deployed in the stomach of the mouse, those that reached the stomach walls remained attached to the lining for a full 12 hours after ingestion, thereby proving their effectiveness and robust nature.

Further, after the mouse was eventually euthanized and its stomach dissected and examined, the nanobots were found to have caused no raised toxicity levels or tissue damage. According to the researchers this was in line with their expectations, particularly given that zinc is also a multipurpose nutrient.

While nanobots have been used before on organic tissue – such as in the destruction of the hepatitis C virus – and others have been designed to be propelled by external forces within a living creature, the University of California micromachines are the first self-propelled, nanoparticle-delivering nanobots. It is this success, the team believes, that merits further research, as the beginning of a proven method for targeted drug administration.

For everyone else, this is exciting technology that may well help to medically treat human beings in the not-too-distant future. Of course, these are early days, and many more successful tests will need to be run before the likes of the US Food and Drug Administration can even consider approving its use in people. But these first steps are vital in what may one day be a commonplace, targeted, and safe alternative to traditional high-dose medications.

No announcement has been made regarding further tests or the possibility of human-based trials.

An Evolutionary Anthropological Perspective on Modern Human Origins

Modern humans are an anomaly in evolution, and the final key features occurred late in human evolution. Ultimate explanations for this evolutionary trajectory are best attained through synthetic studies that integrate genetics, biological anthropology, and archaeology, all resting firmly in the field of evolutionary anthropology. These fields of endeavor typically operate in relative isolation. This synthetic overview identifies the three pillars of human uniqueness: an evolved advanced cognition, hyperprosociality, and a psychology for social learning. These properties are foundational for cumulative culture, the dominant adaptation of our species. Although the Homo line evolved in the direction of advancing cognition, the evidence shows that only modern humans evolved extreme levels of prosociality and social learning; this review offers an explanation. These three traits were in place ∼200–100 ka and produced a creature capable of extraordinary social and technological structures, but one that was also empowered to make war in large groups with advanced weapons. The advance out of Africa and the annihilation of other hominin taxa, and many unprepared megafauna, were assured.
