First, an observation: I noticed that I cannot inflate my cheeks with my mouth open. This means that inflating the cheeks is not something the muscles do on their own; only the pressure of air from the lungs, trapped inside my closed mouth, is responsible for the inflation. Also, once my cheeks are inflated I can move my tongue anywhere and still retain the inflation, so the tongue is not responsible for regulating the air inside the mouth either.
Moving on, we all know that we can inflate both cheeks, inflate only one cheek, or inflate the *portion of the face between the lips and chin* (I don't know what it is actually called). Put simply, we have complete control over which part of the face we inflate.
I don't understand how this is possible without muscle control. Air tends to occupy all available space, so both cheeks should inflate without me having any control whatsoever. As it turns out, though, I can redirect the air to apply pressure to only one cheek, without any help from the tongue. How?
Muscles can only "pull" never "push" - that's just how they are organized at a molecular level. We can only push things because we use joints as levers: if you "pull" on the outside surface of your arms, you can push things out with your hands because that pull acts to extend the forearms.
It's not really possible to puff out your cheeks with muscle, because there is nothing floating in the space outside your face to pull against. Your cheeks puff up a bit when you smile, but I think you will agree that's a completely different type of "puffing" and comes from gathering up a bunch of tissue.
However, it's fairly simple to contract muscles in your face (specifically the buccinator) to prevent a cheek from bulging out by tightening the muscles in the cheek. The air pressure you are producing is pretty minimal, so it doesn't take much strength. The cheek bulges out when the muscles are relaxed, and doesn't bulge when the muscles are contracted.
While it's true that we don't have facial muscles that can actively push our cheeks outward, we DO have muscles that can put them under tension (I don't know which specific ones those are, though).
If we combine air pressure from the lungs with tension in selected cheek muscles, we can control which regions inflate.
As an anecdotal note: building up and using this tension is especially important for musicians playing wind instruments.
My brother’s fraternity has a house dog named Ruby, a golden retriever mix. She is one of the cutest dogs I’ve ever seen (my dog at home is the cutest dog) and I always love petting her whenever I see her. Then I started to wonder: I think most if not all animals are cute. Why do I think this? Why do I think a cat is cute when other people can look at the same cat and think it’s ugly? Is there a scientific reason behind it?
According to this, the answer lies in evolutionary biology. It is in our nature to find things cute. Konrad Lorenz, an Austrian scientist, studied why we humans think things are cute. He came up with the Kindchenschema, the set of traits we find cute and adorable. These traits are a "large head relative to body size; rounded head; large, protruding forehead; large eyes relative to face, eyes below midline of head; rounded, protruding cheeks; rounded body shape; and soft, elastic body surfaces," according to Irenäus Eibl-Eibesfeldt, the founder of human ethology. This gives more information on Lorenz and his studies.
Not only do we enjoy looking at cute little animals, but according to this article cute things also bring out our aggression. The article describes a study in which 109 participants looked at pictures of puppies; they agreed with the statement "I can't handle it!" and also said that they felt like squeezing something. It was found that the cuter the animal was, the more aggressive the response. Rebecca Dyer, a Yale University alumna, calls this phenomenon "cute aggression." The researchers conducted a second study to see whether the aggression was only verbal or physical as well. People who watched a slideshow of cute animals popped 120 bubbles, whereas people who watched a slideshow of neutral pictures popped only 80-100. Dyer suggests we might build up so much pent-up aggression over seeing something cute because our immediate response is to want to take care of it. That might be why, if you see a cute puppy, your first reaction is to want to grab and snuggle it.
However, this site explains how looking at cute animal pictures can help you concentrate. A study conducted at Hiroshima University in Japan had participants look at certain pictures and then complete a concentration task: finding a given number in a random sequence of numbers, or playing the game Operation. The pictures were either pictures of food or pictures of puppies and kittens. After participants looked at cute baby-animal pictures, performance in the Operation game increased by 44%, although it also took 12% longer to complete the task. The numbers task also saw a 16% increase in accuracy.
The picture on the right is my sister’s puppy, Sox. Do you feel like he is helping your concentration at all? Do you find him cute, or have the urge to grab him?
Here is a video from the YouTuber Vsauce, who talks about why we find animals cute, in case you are interested!
Why we look at pretty faces
Few visual impressions compare to humans' interest in faces. New research suggests that our brain rewards us for looking at pretty faces.
A quick glimpse of a face provides us with rich information about the person in front of us. Are we acquainted? Man or woman? Happy or angry? Attractive?
In her PhD thesis, conducted at the Department of Psychology, University of Oslo, Olga Chelnokova has explored how our visual system directs attention to the most important information in a face. Her study suggests that evolution has made us experts on faces.
"We are very curious about others' faces, we read stories in them and evaluate their aesthetic value," says Chelnokova.
Couldn't stop looking
Together with colleagues from the Hedonic Pharmacology Lab research group, she revealed that the brain reward system -- a cluster of regions deep in our brain -- is involved in our evaluation of other people's attractiveness.
"The reward system is involved in generating the experience of pleasure when, for instance, we enjoy tasty food or happen to win a lottery. It turns out that the same system is also engaged in creating the feelings of pleasure when we look at a pretty face," she says.
Previous research has shown a high level of agreement between people when it comes to evaluating facial attractiveness. In the current study, the scientists let participants view images of faces pre-rated as most, intermediate, or less attractive. This was done after participants received a small dose of morphine, a drug that stimulates the reward system.
"Participants rated the most attractive faces as even more attractive, and were willing to press a button more times to look at a picture for longer. They also spent more time looking at the eyes of the people in the pictures. Importantly, we observed the opposite behavior when we blocked the reward system with another drug: for instance, our participants then gave lower ratings to the most attractive faces," explains Chelnokova.
The researchers saw no effect from the drugs when participants viewed images of intermediate or less attractive faces.
Theory of evolution
Is it possible that the human brain has evolved to reinforce behaviors that are evolutionary favorable for us as a species? It very well could have, according to the scientists.
"Previous research has established links between facial attractiveness and several factors important for the evolutionary propagation of our species, such as health and good reproductive potential. We can speculate that there is an evolutionary reason why our brain enjoys looking, and wants to keep looking, at an attractive face," says Olga Chelnokova.
She emphasizes though that the reward system gives an immediate response, an extra pleasure, but that the system's response does not determine the path for our behavior in the long term.
"For instance, we cannot eat chocolate all the time because it is not healthy. Similarly, there are many factors that contribute to a good relationship much more than facial attractiveness. But we learn more about other qualities as we get to know the other person better."
Seeking eye contact
Another study in her thesis let participants look at three-dimensional pictures of faces while tracking their eye movements. The scientists recorded which parts of the face participants looked at when asked to recognize the faces. Participants were shown the same faces from different views.
"Recognizing a face from a novel view is not an easy task, because faces can look quite different depending on the view," explains Chelnokova.
The scientists showed that 3-D information about facial structure helps us recognize faces from different views. They also saw that our visual system directs attention towards facial parts that provide us with necessary information quickly, such as the eyes.
Changes our behaviour
Earlier studies have already linked the brain reward system to our experience of others' facial beauty. In these studies, scientists scanned the participants' brain while they were looking at pictures of faces. The researchers showed that passive viewing of beautiful faces increases activity in the reward system.
However, this previous evidence is only correlational, meaning that the scientists only observed increased brain activation to attractive faces, but did not test whether this activity actually affects how much people liked the faces they saw.
The results from the current PhD dissertation were the first to demonstrate that changing the levels of activity within the brain reward system results in changes in behavior, such as liking attractive faces even more, and wanting to look longer at them.
"The importance of the eyes in our evaluation of others has been well documented. For instance, it is hard to recognize someone if their eyes are hidden, while if someone is lying to us, we can often see it in their eyes. In general, if we are to understand how another person feels, the eyes can give us most of the required information," says Olga Chelnokova.
The nose and the cheeks also turned out to be important for the participants in the study, especially when looking at faces in 3-D, where these features provide valuable cues about the volumetric properties of a face.
Starting in the 1980s, scientific advances allowed the use of DNA as a material for the identification of an individual. The first patent covering the direct use of DNA variation for forensics was filed by Jeffrey Glassberg in 1983, based upon work he had done while at Rockefeller University in 1981. In the United Kingdom, geneticist Sir Alec Jeffreys independently developed a DNA profiling process beginning in late 1984 while working in the Department of Genetics at the University of Leicester.
The process, developed by Jeffreys in conjunction with Peter Gill and Dave Werrett of the Forensic Science Service (FSS), was first used forensically in the solving of the murders of two teenage girls who had been raped and murdered in Narborough, Leicestershire, in 1983 and 1986. In the murder inquiry, led by Detective David Baker, the DNA contained within blood samples obtained voluntarily from around 5,000 local men who willingly assisted Leicestershire Constabulary with the investigation resulted in the exoneration of a man who had confessed to one of the crimes, and the subsequent conviction of Colin Pitchfork. Pitchfork, a local bakery employee, had coerced his coworker Ian Kelly to stand in for him when providing a blood sample; Kelly then used a forged passport to impersonate Pitchfork. Another coworker reported the deception to the police. Pitchfork was arrested, and his blood was sent to Jeffreys's lab for processing and profile development. Pitchfork's profile matched that of DNA left by the murderer, confirming Pitchfork's presence at both crime scenes; he pleaded guilty to both murders.
Although 99.9% of human DNA sequences are the same in every person, enough of the DNA is different that it is possible to distinguish one individual from another, unless they are monozygotic (identical) twins.  DNA profiling uses repetitive sequences that are highly variable,  called variable number tandem repeats (VNTRs), in particular short tandem repeats (STRs), also known as microsatellites, and minisatellites. VNTR loci are similar between closely related individuals, but are so variable that unrelated individuals are unlikely to have the same VNTRs.
The process, developed by Glassberg and independently by Jeffreys, begins with a sample of an individual's DNA (typically called a "reference sample"). Reference samples are usually collected through a buccal swab. When this is unavailable (for example, when a court order is needed but unobtainable) other methods may be needed to collect a sample of blood, saliva, semen, vaginal lubrication, or other fluid or tissue from personal-use items (for example, a toothbrush or razor) or from stored samples (for example, banked sperm or biopsy tissue). Samples obtained from blood relatives can indicate an individual's profile, as could previously profiled human remains. A reference sample is then analyzed to create the individual's DNA profile using one of the techniques discussed below. The DNA profile is then compared against another sample to determine whether there is a genetic match.
DNA extraction
When a sample such as blood or saliva is obtained, the DNA is only a small part of what is present in the sample. Before the DNA can be analyzed, it must be extracted from the cells and purified. There are many ways this can be accomplished, but all methods follow the same basic procedure. The cell and nuclear membranes need to be broken up to allow the DNA to be free in solution. Once the DNA is free, it can be separated from all other cellular components. After the DNA has been separated in solution, the remaining cellular debris can then be removed from the solution and discarded, leaving only DNA. The most common methods of DNA extraction include organic extraction (also called phenol-chloroform extraction), Chelex extraction, and solid-phase extraction. Differential extraction is a modified version of extraction in which DNA from two different types of cells can be separated from each other before being purified from the solution. Each method of extraction works well in the laboratory, but analysts typically select their preferred method based on factors such as the cost, the time involved, and the quantity and quality of the DNA yielded. After the DNA is extracted from the sample, it can be analyzed, whether by RFLP analysis or by quantification and PCR analysis.
RFLP analysis
The first methods used for DNA profiling involved RFLP analysis. DNA is collected from cells and cut into small pieces using a restriction enzyme (a restriction digest). This generates DNA fragments of differing sizes as a consequence of variations between the DNA sequences of different individuals. The fragments are then separated on the basis of size using gel electrophoresis. The separated fragments are then transferred onto a nitrocellulose or nylon filter; this procedure is called a Southern blot. The DNA fragments within the blot are permanently fixed to the filter, and the DNA strands are denatured. Radiolabeled probe molecules are then added that are complementary to sequences in the genome that contain repeat sequences. These repeat sequences tend to vary in length among different individuals and are called variable number tandem repeats, or VNTRs. The probe molecules hybridize to DNA fragments containing the repeat sequences, and excess probe molecules are washed away. The blot is then exposed to an X-ray film. Fragments of DNA that have bound to the probe molecules appear as bands on the film.
The Southern blot technique requires large amounts of non-degraded sample DNA. Also, Alec Jeffreys's original multilocus RFLP technique looked at many minisatellite loci at the same time, increasing the observed variability but making it hard to discern individual alleles (and thereby precluding paternity testing). These early techniques have been supplanted by PCR-based assays.
Polymerase chain reaction (PCR) analysis
In 1983, Kary Mullis developed a process by which specific portions of a DNA sample can be amplified almost indefinitely (Saiki et al. 1985). The process, the polymerase chain reaction (PCR), mimics the biological process of DNA replication, but confines it to specific DNA sequences of interest. With the invention of the PCR technique, DNA profiling took huge strides forward in both discriminating power and the ability to recover information from very small (or degraded) starting samples.
PCR greatly amplifies the amounts of a specific region of DNA. In the PCR process, the DNA sample is denatured into the separate individual polynucleotide strands through heating. Two oligonucleotide DNA primers are used to hybridize to two corresponding nearby sites on opposite DNA strands in such a fashion that the normal enzymatic extension of the active terminal of each primer (that is, the 3’ end) leads toward the other primer. PCR uses replication enzymes that are tolerant of high temperatures, such as the thermostable Taq polymerase. In this fashion, two new copies of the sequence of interest are generated. Repeated denaturation, hybridization, and extension in this fashion produce an exponentially growing number of copies of the DNA of interest. Instruments that perform thermal cycling are readily available from commercial sources. This process can produce a million-fold or greater amplification of the desired region in 2 hours or less.
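The exponential copying described above can be sketched with a short calculation. This is an idealized model that assumes perfect doubling every cycle, which real reactions only approximate:

```python
def pcr_copies(initial_copies: int, cycles: int) -> int:
    # Idealized PCR: each thermal cycle doubles every template molecule.
    # Real reactions fall short of perfect doubling, so this is an upper bound.
    return initial_copies * 2 ** cycles

# Starting from a single template molecule:
for n in (10, 20, 30):
    print(f"after {n} cycles: {pcr_copies(1, n):,} copies")
```

Since 2^20 is already over a million, a typical run of 28-32 cycles comfortably delivers the million-fold or greater amplification mentioned above.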
Early assays such as the HLA-DQ alpha reverse dot blot strips grew to be very popular owing to their ease of use, and the speed with which a result could be obtained. However, they were not as discriminating as RFLP analysis. It was also difficult to determine a DNA profile for mixed samples, such as a vaginal swab from a sexual assault victim.
However, the PCR method was readily adaptable for analyzing VNTR, in particular STR loci. In recent years, research in human DNA quantitation has focused on new "real-time" quantitative PCR (qPCR) techniques. Quantitative PCR methods enable automated, precise, and high-throughput measurements. Inter-laboratory studies have demonstrated the importance of human DNA quantitation on achieving reliable interpretation of STR typing and obtaining consistent results across laboratories.
STR analysis
The system of DNA profiling used today is based on polymerase chain reaction (PCR) and uses simple sequences or short tandem repeats (STRs). This method uses highly polymorphic regions that have short repeated sequences of DNA (the most common is 4 bases repeated, but there are other lengths in use, including 3 and 5 bases). Because unrelated people almost certainly have different numbers of repeat units, STRs can be used to discriminate between unrelated individuals. These STR loci (locations on a chromosome) are targeted with sequence-specific primers and amplified using PCR. The DNA fragments that result are then separated and detected using electrophoresis. There are two common methods of separation and detection, capillary electrophoresis (CE) and gel electrophoresis.
Each STR is polymorphic, but the number of alleles is very small. Typically each STR allele will be shared by around 5–20% of individuals. The power of STR analysis derives from inspecting multiple STR loci simultaneously. The pattern of alleles can identify an individual quite accurately. Thus STR analysis provides an excellent identification tool. The more STR regions that are tested in an individual the more discriminating the test becomes.
From country to country, different STR-based DNA-profiling systems are in use. In North America, systems that amplify the CODIS 20 core loci are almost universal, whereas in the United Kingdom the DNA-17 loci system (which is compatible with The National DNA Database) is in use, and Australia uses 18 core markers. Whichever system is used, many of the STR regions used are the same. These DNA-profiling systems are based on multiplex reactions, whereby many STR regions will be tested at the same time.
The true power of STR analysis is in its statistical power of discrimination. Because the 20 loci currently used for discrimination in CODIS are independently assorted (having a certain number of repeats at one locus does not change the likelihood of having any number of repeats at any other locus), the product rule for probabilities can be applied. This means that, if someone has the DNA type ABC, where the three loci are independent, then the probability of that individual having that DNA type is the probability of having type A times the probability of having type B times the probability of having type C. This has resulted in the ability to generate match probabilities of 1 in a quintillion (1×10^18) or more. However, DNA database searches have shown false DNA profile matches to be much more frequent than expected. Moreover, since there are about 12 million monozygotic twins on Earth, the theoretical probability is not accurate.
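The product rule above can be sketched in a few lines. The genotype frequencies below are invented for illustration, not real population data:

```python
# Product rule across independent STR loci: the frequency of the full
# profile is the product of the per-locus genotype frequencies.
# Locus names and frequencies here are hypothetical.
genotype_freqs = {
    "locus_A": 0.05,
    "locus_B": 0.08,
    "locus_C": 0.10,
}

profile_freq = 1.0
for freq in genotype_freqs.values():
    profile_freq *= freq

# 0.05 * 0.08 * 0.10 = 0.0004, i.e. about 1 in 2,500 with only three loci.
print(f"random match probability: about 1 in {1 / profile_freq:,.0f}")
```

Each additional locus multiplies the rarity further, which is how 20 loci with allele frequencies of 5-20% reach the order of 1 in 10^18.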
In practice, the risk of contaminated-matching is much greater than matching a distant relative, such as contamination of a sample from nearby objects, or from left-over cells transferred from a prior test. The risk is greater for matching the most common person in the samples: Everything collected from, or in contact with, a victim is a major source of contamination for any other samples brought into a lab. For that reason, multiple control-samples are typically tested in order to ensure that they stayed clean, when prepared during the same period as the actual test samples. Unexpected matches (or variations) in several control-samples indicates a high probability of contamination for the actual test samples. In a relationship test, the full DNA profiles should differ (except for twins), to prove that a person was not actually matched as being related to their own DNA in another sample.
Another technique, AmpFLP, or amplified fragment length polymorphism, was also put into practice during the early 1990s. It was faster than RFLP analysis and used PCR to amplify DNA samples. It relied on variable number tandem repeat (VNTR) polymorphisms to distinguish alleles, which were separated on a polyacrylamide gel using an allelic ladder (as opposed to a molecular-weight ladder). Bands could be visualized by silver staining the gel. One popular locus for fingerprinting was D1S80. As with all PCR-based methods, highly degraded DNA or very small amounts of DNA may cause allelic dropout (causing a heterozygote to be mistaken for a homozygote) or other stochastic effects. In addition, because the analysis is done on a gel, very high repeat numbers may bunch together at the top of the gel, making them difficult to resolve. AmpFLP analysis can be highly automated, and allows for easy creation of phylogenetic trees based on comparing individual samples of DNA. Due to its relatively low cost and ease of set-up and operation, AmpFLP remains popular in lower-income countries.
DNA family relationship analysis
Using PCR technology, DNA analysis is widely applied to determine genetic family relationships such as paternity, maternity, siblingship and other kinships.
During conception, the father's sperm cell and the mother's egg cell, each containing half the amount of DNA found in other body cells, meet and fuse to form a fertilized egg, called a zygote. The zygote contains a complete set of DNA molecules, a unique combination of DNA from both parents. This zygote divides and multiplies into an embryo and later, a full human being.
At each stage of development, all the cells forming the body contain the same DNA—half from the father and half from the mother. This fact allows relationship testing to use all types of samples, including loose cells from the cheeks collected using buccal swabs, blood, or other types of samples.
There are predictable inheritance patterns at certain locations (called loci) in the human genome, which have been found to be useful in determining identity and biological relationships. These loci contain specific DNA markers that scientists use to identify individuals. In a routine DNA paternity test, the markers used are short tandem repeats (STRs), short pieces of DNA that occur in highly differential repeat patterns among individuals.
Each person's DNA contains two copies of these markers—one copy inherited from the father and one from the mother. Within a population, the markers at each person's DNA location could differ in length and sometimes sequence, depending on the markers inherited from the parents.
The combination of marker sizes found in each person makes up their unique genetic profile. When determining the relationship between two individuals, their genetic profiles are compared to see if they share the same inheritance patterns at a statistically conclusive rate.
For example, the following sample report from the commercial DNA paternity-testing laboratory Universal Genetics shows how relatedness between parents and child is identified from these markers:
DNA marker   Mother     Child      Alleged father
D21S11       28, 30     28, 31.2   29, 31.2
D7S820       9, 10      10, 11     11, 12
TH01         6, 9.3     9, 9.3     8, 9
D13S317      10, 12     12, 13     11, 13
D19S433      14, 16.2   14, 15     14.2, 15
The partial results indicate that the child and the alleged father's DNA match among these five markers. The complete test results show this correlation on 16 markers between the child and the tested man to enable a conclusion to be drawn as to whether or not the man is the biological father.
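The inheritance logic behind these markers can be sketched as a simple consistency check: at each marker, the child's allele that did not come from the mother must appear in the alleged father's genotype. This is a simplification of real paternity testing, which uses likelihood ratios rather than a yes/no test:

```python
# Alleles at each marker, taken from the sample report above.
markers = {
    "D21S11":  {"mother": {"28", "30"},   "child": {"28", "31.2"}, "father": {"29", "31.2"}},
    "D7S820":  {"mother": {"9", "10"},    "child": {"10", "11"},   "father": {"11", "12"}},
    "TH01":    {"mother": {"6", "9.3"},   "child": {"9", "9.3"},   "father": {"8", "9"}},
    "D13S317": {"mother": {"10", "12"},   "child": {"12", "13"},   "father": {"11", "13"}},
    "D19S433": {"mother": {"14", "16.2"}, "child": {"14", "15"},   "father": {"14.2", "15"}},
}

def consistent(m: dict) -> bool:
    # The paternal allele is whichever child allele the mother cannot explain;
    # if both child alleles match the mother, either could be paternal.
    paternal = m["child"] - m["mother"]
    candidates = paternal if paternal else m["child"]
    return bool(candidates & m["father"])  # the father must carry one of them

print(all(consistent(m) for m in markers.values()))  # True
```

A single inconsistent marker would normally prompt re-examination (mutation or exclusion), which is why real reports weigh every marker statistically instead of applying a hard rule.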
Each marker is assigned a Paternity Index (PI), a statistical measure of how powerfully a match at that marker indicates paternity. The PI values of the individual markers are multiplied together to generate the Combined Paternity Index (CPI), which indicates the overall probability of an individual being the biological father of the tested child relative to a randomly selected man from the entire population of the same race. The CPI is then converted into a Probability of Paternity showing the degree of relatedness between the alleged father and child.
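The CPI arithmetic can be sketched as follows. The per-marker PI values below are hypothetical, and the conversion to a Probability of Paternity assumes the conventional equal prior odds:

```python
# Hypothetical per-marker Paternity Index values for a five-marker panel.
paternity_indices = [1.90, 2.70, 1.45, 3.10, 2.05]

# Combined Paternity Index: the product of the individual PIs.
cpi = 1.0
for pi in paternity_indices:
    cpi *= pi

# Standard conversion with a 0.5 prior: P = CPI / (CPI + 1).
probability_of_paternity = cpi / (cpi + 1)
print(f"CPI = {cpi:.2f}, probability of paternity = {probability_of_paternity:.2%}")
```

Because the PIs multiply, even modest per-marker values compound quickly; a full 16-marker panel routinely produces probabilities above 99.99%.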
The DNA test report in other family relationship tests, such as grandparentage and siblingship tests, is similar to a paternity test report. Instead of the Combined Paternity Index, a different value, such as a Siblingship Index, is reported.
The report shows the genetic profiles of each tested person. If there are markers shared among the tested individuals, the probability of biological relationship is calculated to determine how likely the tested individuals share the same markers due to a blood relationship.
Y-chromosome analysis
Recent innovations have included the creation of primers targeting polymorphic regions on the Y-chromosome (Y-STR), which allows resolution of a mixed DNA sample from a male and female or cases in which a differential extraction is not possible. Y-chromosomes are paternally inherited, so Y-STR analysis can help in the identification of paternally related males. Y-STR analysis was performed in the Jefferson-Hemings controversy to determine if Thomas Jefferson had sired a son with one of his slaves.
The analysis of the Y-chromosome yields weaker results than autosomal chromosome analysis with regard to individual identification. The Y male sex-determining chromosome, as it is inherited only by males from their fathers, is almost identical along the paternal line. On the other hand, the Y-STR haplotype provides powerful genealogical information as a patrilinear relationship can be traced back over many generations.
Furthermore, due to this paternal inheritance, Y haplotypes provide information about the genetic ancestry of the male population. To investigate this population history, and to provide estimates of haplotype frequencies in criminal casework, the Y haplotype reference database (YHRD) was created in 2000 as an online resource. It currently comprises more than 300,000 minimal (8-locus) haplotypes from worldwide populations.
Mitochondrial analysis
For highly degraded samples, it is sometimes impossible to get a complete profile of the 13 CODIS STRs. In these situations, mitochondrial DNA (mtDNA) is sometimes typed due to there being many copies of mtDNA in a cell, while there may only be 1–2 copies of the nuclear DNA. Forensic scientists amplify the HV1 and HV2 regions of the mtDNA, and then sequence each region and compare single-nucleotide differences to a reference. Because mtDNA is maternally inherited, directly linked maternal relatives can be used as match references, such as one's maternal grandmother's daughter's son. In general, a difference of two or more nucleotides is considered to be an exclusion. Heteroplasmy and poly-C differences may throw off straight sequence comparisons, so some expertise on the part of the analyst is required. mtDNA is useful in determining clear identities, such as those of missing people when a maternally linked relative can be found. mtDNA testing was used in determining that Anna Anderson was not the Russian princess she had claimed to be, Anastasia Romanov.
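The comparison step can be sketched with a toy example. The sequences below are invented; real analysis compares aligned HV1/HV2 sequences and must account for heteroplasmy and poly-C stretches:

```python
def count_differences(seq_a: str, seq_b: str) -> int:
    # Count single-nucleotide differences between two aligned sequences.
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    return sum(1 for a, b in zip(seq_a, seq_b) if a != b)

# Toy aligned fragments, not real HV1 data.
reference = "ACCTAGGCTA"   # maternal relative's reference sequence
evidence  = "ACCTAGGCTA"   # evidence sample
print(count_differences(reference, evidence))          # 0 -> cannot exclude

mismatched = "ACGTAGGCTT"
print(count_differences(reference, mismatched))        # 2 -> exclusion by convention
```

The two-or-more-differences threshold mirrors the exclusion convention described above; a single difference is typically treated as inconclusive pending expert review.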
mtDNA can be obtained from material such as hair shafts and old bones and teeth.
When people think of DNA analysis, they often think of shows like NCIS or CSI, which portray DNA samples coming into a lab and being instantly analyzed, followed by a picture of the suspect appearing within minutes. The reality, however, is quite different: perfect DNA samples are often not collected from the scene of a crime. Homicide victims are frequently left exposed to harsh conditions before they are found, and objects used to commit crimes have often been handled by more than one person. The two most prevalent issues forensic scientists encounter when analyzing DNA samples are degraded samples and DNA mixtures.
Degraded DNA
In the real world DNA labs often have to deal with DNA samples that are less than ideal. DNA samples taken from crime scenes are often degraded, which means that the DNA has started to break down into smaller fragments. Victims of homicides might not be discovered right away, and in the case of a mass casualty event it could be hard to get DNA samples before the DNA has been exposed to degradation elements.
Degradation or fragmentation of DNA at crime scenes can occur because of a number of reasons, with environmental exposure often being the most common cause. Biological samples that have been exposed to the environment can get degraded by water and enzymes called nucleases. Nucleases essentially ‘chew’ up the DNA into fragments over time and are found everywhere in nature.
Before modern PCR methods existed, it was almost impossible to analyze degraded DNA samples. Restriction fragment length polymorphism (RFLP) analysis, the first technique used for DNA analysis in forensic science, required high-molecular-weight DNA in the sample in order to get reliable data. High-molecular-weight DNA, however, is lacking in degraded samples, as the DNA is too fragmented to accurately carry out RFLP. It wasn't until modern PCR techniques were invented that analysis of degraded DNA samples could be carried out. Multiplex PCR in particular made it possible to isolate and amplify the small fragments of DNA still left in degraded samples. When multiplex PCR methods are compared to older methods like RFLP, a vast difference can be seen: multiplex PCR can theoretically amplify less than 1 ng of DNA, while RFLP required at least 100 ng of DNA to carry out an analysis.
In a forensic approach to a degraded DNA sample, STR loci are often amplified using PCR-based methods. Although STR loci amplify from degraded DNA with a greater probability of success, larger STR loci may still fail to amplify, likely yielding only a partial profile, which reduces the statistical weight of association in the event of a match.
MiniSTR Analysis
In instances where DNA samples are degraded, as in intense fires or when all that remains is bone fragments, standard STR testing can be inadequate. When standard STR testing is done on highly degraded samples, the larger STR loci often drop out and only partial DNA profiles are obtained. While partial DNA profiles can be a powerful tool, the random match probabilities will be larger than if a full profile were obtained. One method developed to analyse degraded DNA samples is miniSTR technology. In this approach, primers are specially designed to bind closer to the STR region. In normal STR testing the primers bind to longer sequences that contain the STR region within the segment; miniSTR analysis, by contrast, targets just the STR region, resulting in a much smaller DNA product.
Placing the primers closer to the actual STR regions gives a higher chance that the region will amplify successfully, so more complete DNA profiles can be obtained. That smaller PCR products yield a higher success rate with highly degraded samples was first reported in 1995, when miniSTR technology was used to identify victims of the Waco fire. In that case the fire had damaged the DNA samples so badly that normal STR testing could not produce a positive identification of some of the victims.
DNA Mixtures
Mixtures are another common issue that forensic scientists face when analyzing unknown or questioned DNA samples. A mixture is defined as a DNA sample that contains two or more individual contributors. This can occur when a sample is swabbed from an item handled by more than one person, or when a sample contains both the victim's and the assailant's DNA. The presence of more than one individual in a sample can make it challenging to detect individual profiles, and interpretation of mixtures should only be done by highly trained analysts. Mixtures containing two or three individuals can be interpreted, though with difficulty. Mixtures containing four or more individuals are much too convoluted to yield individual profiles. One common scenario in which a mixture is obtained is sexual assault, where a sample may contain material from the victim, the victim's consensual sexual partners, and the perpetrator(s).
As detection methods in DNA profiling advance, forensic scientists are seeing more DNA samples that contain mixtures, as even the smallest contributor can now be detected by modern tests. The ease with which forensic scientists can interpret a DNA mixture largely depends on the ratio of DNA present from each individual, the genotype combinations, and the total amount of DNA amplified. The DNA ratio is often the most important factor in determining whether a mixture can be interpreted. For example, in a sample with two contributors, it is easy to interpret individual profiles if one person contributed much more DNA than the other. When a sample has three or more contributors, determining individual profiles becomes extremely difficult. Fortunately, advances in probabilistic genotyping could make this sort of determination possible in the future. Probabilistic genotyping uses computer software to run through thousands of mathematical computations and produce statistical likelihoods of the individual genotypes found in a mixture. Probabilistic genotyping software packages often used in labs today include STRmix and TrueAllele.
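As a toy illustration of why a lopsided mixture ratio makes interpretation easier, the sketch below separates a hypothetical two-person, four-peak locus by peak height alone. Real probabilistic genotyping software such as STRmix and TrueAllele models peak heights statistically rather than with a simple rule like this; the allele calls and peak heights here are invented for the example.

```python
# Toy sketch: separating a two-person DNA mixture by peak height.
# This illustrates only the basic intuition that a clear major:minor
# mixture ratio makes deconvolution easy; all data are hypothetical.

def deconvolve_locus(peaks):
    """peaks: dict mapping allele -> relative peak height (RFU).
    For a 4-peak locus, assign the two tallest peaks to the major
    contributor and the two shortest to the minor contributor."""
    ranked = sorted(peaks, key=peaks.get, reverse=True)
    major, minor = sorted(ranked[:2]), sorted(ranked[2:])
    return major, minor

# Hypothetical locus with four alleles; the roughly 4:1 mixture ratio
# makes the two contributors easy to tell apart.
locus = {"12": 2000, "14": 1900, "15": 480, "17": 510}
major, minor = deconvolve_locus(locus)
print(major)  # ['12', '14']  (major contributor's genotype)
print(minor)  # ['15', '17']  (minor contributor's genotype)
```

With a mixture ratio near 1:1 this simple rule breaks down, which is exactly why the paragraph above stresses the DNA ratio as the most important factor.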
An early application of a DNA database was the compilation of a Mitochondrial DNA Concordance, prepared by Kevin W. P. Miller and John L. Dawson at the University of Cambridge from 1996 to 1999 from data collected as part of Miller's PhD thesis. There are now several DNA databases in existence around the world. Some are private, but most of the largest databases are government-controlled. The United States maintains the largest DNA database, with the Combined DNA Index System (CODIS) holding over 13 million records as of May 2018. The United Kingdom maintains the National DNA Database (NDNAD), which is of similar size despite the UK's smaller population. The size of this database, and its rate of growth, concern civil liberties groups in the UK, where police have wide-ranging powers to take samples and retain them even in the event of acquittal. The Conservative–Liberal Democrat coalition partially addressed these concerns with part 1 of the Protection of Freedoms Act 2012, under which DNA samples must be deleted if suspects are acquitted or not charged, except in relation to certain (mostly serious and/or sexual) offenses. Public discourse around the introduction of advanced forensic techniques (such as genetic genealogy using public genealogy databases and DNA phenotyping) has been limited, disjointed and unfocused, and these techniques raise issues of privacy and consent that may warrant additional legal protections.
The USA PATRIOT Act provides a means for the U.S. government to obtain DNA samples from suspected terrorists. DNA information from crimes is collected and deposited into the CODIS database, which is maintained by the FBI. CODIS enables law enforcement officials to compare DNA samples from crime scenes against the database, providing a means of finding the specific biological profiles associated with collected DNA evidence.
When a match from a national DNA database links a crime scene to an offender who has provided a DNA sample to that database, the link is often referred to as a cold hit. A cold hit is of value in referring the police agency to a specific suspect, but it is of less evidential value than a DNA match made from outside the database.
FBI agents cannot legally store DNA of a person not convicted of a crime; DNA collected from a suspect who is not later convicted must be disposed of and not entered into the database. In 1998, a man residing in the UK was arrested on an accusation of burglary. His DNA was taken and tested, and he was later released. Nine months later, this man's DNA was accidentally and illegally entered into the DNA database. New DNA is automatically compared to DNA from cold cases and, in this case, the man was found to match DNA from a rape and assault case one year earlier. The government then prosecuted him for these crimes. During the trial, the defense requested that the DNA match be removed from the evidence because it had been illegally entered into the database, and the request was granted. The DNA of a perpetrator, collected from victims of rape, can be stored for years until a match is found. In 2014, to address this problem, Congress extended a bill that helps states deal with "a backlog" of evidence.
As DNA profiling became a key piece of evidence in court, defense lawyers based their arguments on statistical reasoning. For example, given a match with a 1 in 5 million probability of occurring by chance, a lawyer would argue that in a country of, say, 60 million people there were 12 people who would also match the profile, and that this translated to only a 1 in 12 chance of the suspect being the guilty one. This argument is not sound unless the suspect was drawn at random from the population of the country. In fact, a jury should consider how likely it is that an individual matching the genetic profile would also have been a suspect in the case for other reasons. In addition, different DNA analysis processes can reduce the amount of DNA recovered if the procedures are not properly done, so the number of times a piece of evidence is sampled can diminish the DNA collection efficiency. Another spurious statistical argument, known as the prosecutor's fallacy, rests on the false assumption that a 1 in 5 million probability of a match automatically translates into a 1 in 5 million probability of innocence.
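The arithmetic behind the defense argument above, and why it collapses once the suspect pool is narrowed by other evidence, can be sketched as follows (the 1-in-5-million and 60-million figures are the ones quoted in the paragraph; the narrowed pool of 100 is an invented illustration):

```python
# The defense's statistical framing from the paragraph above.
one_in = 5_000_000              # quoted random-match probability: 1 in 5 million
population = 60_000_000         # quoted national population

# Expected number of people in the country matching the profile:
expected_matches = population / one_in
print(expected_matches)  # 12.0 -> hence the claimed "1 in 12" odds of guilt

# That framing assumes the suspect was drawn at random from the whole
# country. If independent evidence had already narrowed the pool to,
# say, 100 plausible suspects, a coincidental match among them is very
# unlikely and the DNA evidence carries far more weight:
narrowed_pool = 100
print(narrowed_pool / one_in)  # 2e-05 expected coincidental matches
```

This is why the paragraph stresses that the "1 in 12" argument only holds if the suspect was effectively picked at random from the entire population.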
When using RFLP, the theoretical risk of a coincidental match is 1 in 100 billion (100,000,000,000), although the practical risk is actually 1 in 1,000 because monozygotic twins make up 0.2% of the human population. Moreover, the rate of laboratory error is almost certainly higher than this, and actual laboratory procedures often do not reflect the theory under which the coincidence probabilities were computed. For example, the coincidence probabilities may be calculated based on the probabilities that markers in two samples have bands in precisely the same location, but a laboratory worker may conclude that similar—but not precisely identical—band patterns result from identical genetic samples with some imperfection in the agarose gel. In that case, the laboratory worker increases the coincidence risk by expanding the criteria for declaring a match. Recent studies have quoted relatively high error rates, which may be cause for concern. In the early days of genetic fingerprinting, the population data needed to accurately compute a match probability were sometimes unavailable. Between 1992 and 1996, arbitrarily low ceilings were controversially put on match probabilities used in RFLP analysis, rather than the higher theoretically computed ones. Today, RFLP has fallen into disuse due to the advent of more discriminating, sensitive and easier technologies.
Since 1998, the DNA profiling system supported by the National DNA Database in the UK has been the SGM+ system, which includes 10 STR regions and a sex-indicating test. STRs do not suffer from such subjectivity and provide a similar power of discrimination (1 in 10¹³ for unrelated individuals if a full SGM+ profile is used). Figures of this magnitude are not considered statistically supportable by scientists in the UK; for unrelated individuals with fully matching DNA profiles, a match probability of 1 in a billion is considered statistically supportable. However, with any DNA technique, the cautious juror should not convict on genetic fingerprint evidence alone if other factors raise doubt. Contamination with other evidence (secondary transfer) is a key source of incorrect DNA profiles, and raising doubts as to whether a sample has been adulterated is a favorite defense technique. More rarely, chimerism is one instance in which the lack of a genetic match may unfairly exclude a suspect.
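To see where a figure like 1 in 10¹³ comes from, the product rule multiplies per-locus genotype frequencies across the 10 SGM+ STR loci. The sketch below uses an assumed, purely illustrative average genotype frequency of about 1 in 20 per locus, not a real population statistic:

```python
# Illustrative product-rule calculation for a 10-locus SGM+ profile.
# The 1-in-20 per-locus genotype frequency is an assumed round figure.
per_locus = 1 / 20
loci = 10

profile_freq = per_locus ** loci   # multiply across independent loci
print(f"about 1 in {1 / profile_freq:.1e}")  # about 1 in 1.0e+13
```

The product rule assumes the loci are statistically independent, which is why forensic STR panels are chosen from different chromosomes or well-separated chromosomal regions.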
Evidence of genetic relationship
It is possible to use DNA profiling as evidence of genetic relationship, although such evidence varies in strength from weak to positive. Testing that shows no relationship is absolutely certain. Further, while almost all individuals have a single and distinct set of genes, ultra-rare individuals, known as "chimeras", have at least two different sets of genes. There have been two cases of DNA profiling that falsely suggested that a mother was unrelated to her children.  This happens when two eggs are fertilized at the same time and fuse together to create one individual instead of twins.
In one case, a criminal planted fake DNA evidence in his own body: John Schneeberger raped one of his sedated patients in 1992 and left semen on her underwear. Police drew what they believed to be Schneeberger's blood and compared its DNA against the crime scene semen DNA on three occasions, never showing a match. It turned out that he had surgically inserted a Penrose drain into his arm and filled it with foreign blood and anticoagulants.
In a study conducted by the life science company Nucleix and published in the journal Forensic Science International, scientists found that an in vitro synthesized sample of DNA matching any desired genetic profile can be constructed using standard molecular biology techniques, without obtaining any actual tissue from that person. Nucleix claims it can also prove the difference between non-altered DNA and any that was synthesized.
In the case of the Phantom of Heilbronn, police detectives found DNA traces from the same woman on various crime scenes in Austria, Germany, and France—among them murders, burglaries and robberies. Only after the DNA of the "woman" matched the DNA sampled from the burned body of a male asylum seeker in France did detectives begin to have serious doubts about the DNA evidence. It was eventually discovered that DNA traces were already present on the cotton swabs used to collect the samples at the crime scene, and the swabs had all been produced at the same factory in Austria. The company's product specification said that the swabs were guaranteed to be sterile, but not DNA-free.
Familial DNA searching
Familial DNA searching (sometimes referred to as "familial DNA" or "familial DNA database searching") is the practice of creating new investigative leads in cases where DNA evidence found at the scene of a crime (forensic profile) strongly resembles that of an existing DNA profile (offender profile) in a state DNA database but there is not an exact match.   After all other leads have been exhausted, investigators may use specially developed software to compare the forensic profile to all profiles taken from a state's DNA database to generate a list of those offenders already in the database who are most likely to be a very close relative of the individual whose DNA is in the forensic profile.  To eliminate the majority of this list when the forensic DNA is a man's, crime lab technicians conduct Y-STR analysis. Using standard investigative techniques, authorities are then able to build a family tree. The family tree is populated from information gathered from public records and criminal justice records. Investigators rule out family members' involvement in the crime by finding excluding factors such as sex, living out of state or being incarcerated when the crime was committed. They may also use other leads from the case, such as witness or victim statements, to identify a suspect. Once a suspect has been identified, investigators seek to legally obtain a DNA sample from the suspect. This suspect DNA profile is then compared to the sample found at the crime scene to definitively identify the suspect as the source of the crime scene DNA.
Familial DNA database searching was first used in an investigation leading to the conviction of Jeffrey Gafoor of the murder of Lynette White in the United Kingdom on 4 July 2003. DNA evidence was matched to Gafoor's nephew, who at 14 years old had not been born at the time of the murder in 1988. It was used again in 2004 to find a man who threw a brick from a motorway bridge and hit a lorry driver, killing him. DNA found on the brick matched that found at the scene of a car theft earlier in the day, but there were no good matches on the national DNA database. A wider search found a partial match to an individual; on being questioned, this man revealed he had a brother, Craig Harman, who lived very close to the original crime scene. Harman voluntarily submitted a DNA sample, and confessed when it matched the sample from the brick. Currently, familial DNA database searching is not conducted on a national level in the United States, where states determine how and when to conduct familial searches. The first familial DNA search with a subsequent conviction in the United States was conducted in Denver, Colorado, in 2008, using software developed under the leadership of Denver District Attorney Mitch Morrissey and Denver Police Department Crime Lab Director Gregg LaBerge. California was the first state to implement a policy for familial searching under then Attorney General, now Governor, Jerry Brown. In his role as consultant to the Familial Search Working Group of the California Department of Justice, former Alameda County Prosecutor Rock Harmon is widely considered to have been the catalyst in the adoption of familial search technology in California. The technique was used to catch the Los Angeles serial killer known as the "Grim Sleeper" in 2010. It wasn't a witness or informant that tipped off law enforcement to the identity of the "Grim Sleeper" serial killer, who had eluded police for more than two decades, but DNA from the suspect's own son.
The suspect's son had been arrested and convicted on a felony weapons charge and swabbed for DNA the year before. When his DNA was entered into the database of convicted felons, detectives were alerted to a partial match to evidence found at the "Grim Sleeper" crime scenes. Lonnie David Franklin Jr., also known as the Grim Sleeper, was charged with ten counts of murder and one count of attempted murder. More recently, familial DNA led to the arrest of 21-year-old Elvis Garcia on charges of sexual assault and false imprisonment of a woman in Santa Cruz in 2008. In March 2011, Virginia Governor Bob McDonnell announced that Virginia would begin using familial DNA searches. Other states are expected to follow.
At a press conference in Virginia on 7 March 2011, regarding the East Coast Rapist, Prince William County prosecutor Paul Ebert and Fairfax County Police Detective John Kelly said the case would have been solved years ago if Virginia had used familial DNA searching. Aaron Thomas, the suspected East Coast Rapist, was arrested in connection with the rape of 17 women from Virginia to Rhode Island, but familial DNA was not used in the case. 
Critics of familial DNA database searches argue that the technique is an invasion of an individual's 4th Amendment rights.  Privacy advocates are petitioning for DNA database restrictions, arguing that the only fair way to search for possible DNA matches to relatives of offenders or arrestees would be to have a population-wide DNA database.  Some scholars have pointed out that the privacy concerns surrounding familial searching are similar in some respects to other police search techniques,  and most have concluded that the practice is constitutional.  The Ninth Circuit Court of Appeals in United States v. Pool (vacated as moot) suggested that this practice is somewhat analogous to a witness looking at a photograph of one person and stating that it looked like the perpetrator, which leads law enforcement to show the witness photos of similar looking individuals, one of whom is identified as the perpetrator.  Regardless of whether familial DNA searching was the method used to identify the suspect, authorities always conduct a normal DNA test to match the suspect's DNA with that of the DNA left at the crime scene.
Critics also claim that racial profiling could occur on account of familial DNA testing. In the United States, the conviction rates of racial minorities are much higher than that of the overall population. It is unclear whether this is due to discrimination from police officers and the courts, as opposed to a simple higher rate of offence among minorities. Arrest-based databases, which are found in the majority of the United States, lead to an even greater level of racial discrimination. An arrest, as opposed to conviction, relies much more heavily on police discretion. 
For instance, investigators with Denver District Attorney's Office successfully identified a suspect in a property theft case using a familial DNA search. In this example, the suspect's blood left at the scene of the crime strongly resembled that of a current Colorado Department of Corrections prisoner.  Using publicly available records, the investigators created a family tree. They then eliminated all the family members who were incarcerated at the time of the offense, as well as all of the females (the crime scene DNA profile was that of a male). Investigators obtained a court order to collect the suspect's DNA, but the suspect actually volunteered to come to a police station and give a DNA sample. After providing the sample, the suspect walked free without further interrogation or detainment. Later confronted with an exact match to the forensic profile, the suspect pleaded guilty to criminal trespass at the first court date and was sentenced to two years probation.
In Italy, a familial DNA search was conducted to solve the case of the murder of Yara Gambirasio, whose body was found in the bush three months after her disappearance. A DNA trace was found on the underwear of the murdered teenager, and DNA samples were requested from people living near the municipality of Brembate di Sopra; a common male ancestor was found in the DNA sample of a young man not involved in the murder. After a long investigation, the father of the supposed killer was identified as Giuseppe Guerinoni, a deceased man, but his two sons born from his wife were not related to the DNA found on Yara's body. After three and a half years, the DNA found on the underwear of the girl was matched to Massimo Giuseppe Bossetti, who was arrested and accused of the murder of the 13-year-old. In the summer of 2016, Bossetti was found guilty and sentenced to life imprisonment by the Corte d'assise of Bergamo.
Partial matches
Partial DNA matches are the result of moderate-stringency CODIS searches that produce a potential match sharing at least one allele at every locus. Partial matching does not involve the use of familial-search software, such as that used in the UK and the United States, or additional Y-STR analysis, and therefore often misses sibling relationships. Partial matching has been used to identify suspects in several cases in the UK and the United States, and has also been used as a tool to exonerate the falsely accused. Darryl Hunt was wrongly convicted in connection with the rape and murder of a young woman in 1984 in North Carolina. Hunt was exonerated in 2004 when a DNA database search produced a remarkably close match between a convicted felon and the forensic profile from the case. The partial match led investigators to the felon's brother, Willard E. Brown, who confessed to the crime when confronted by police. A judge then signed an order to dismiss the case against Hunt. In Italy, partial matching was used in the controversial murder case of Yara Gambirasio, a child found dead about a month after her presumed kidnapping. In that case, the partial match was used as the only incriminating element against the defendant, Massimo Bossetti, who was subsequently convicted of the murder (pending appeal before the Italian Supreme Court).
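The moderate-stringency criterion described above can be sketched as a simple check that two profiles share at least one allele at every locus. The locus names below are real CODIS core STR loci, but the allele calls are invented for illustration:

```python
# Minimal sketch of a moderate-stringency partial-match check:
# the two profiles must share at least one allele at every locus.
# Allele values are hypothetical.

def is_partial_match(profile_a, profile_b):
    """Each profile: dict mapping locus name -> set of alleles.
    Assumes both profiles were typed at the same loci."""
    return all(profile_a[locus] & profile_b[locus] for locus in profile_a)

forensic  = {"D8S1179": {12, 14}, "D21S11": {28, 30}, "TH01": {6, 9}}
offender  = {"D8S1179": {14, 15}, "D21S11": {30, 31}, "TH01": {6, 7}}  # e.g. a sibling
unrelated = {"D8S1179": {10, 11}, "D21S11": {29, 32}, "TH01": {7, 8}}

print(is_partial_match(forensic, offender))   # True  (one shared allele per locus)
print(is_partial_match(forensic, unrelated))  # False (no shared allele at some locus)
```

Because close relatives share roughly half their alleles, a partial match across many loci is far more probable for a sibling or parent than for an unrelated person, which is what makes this kind of search useful as an investigative lead.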
Surreptitious DNA collecting
Police forces may collect DNA samples without a suspect's knowledge, and use it as evidence. The legality of the practice has been questioned in Australia. 
In the United States, it has been accepted, courts often ruling that there is no expectation of privacy, citing California v. Greenwood (1988), in which the Supreme Court held that the Fourth Amendment does not prohibit the warrantless search and seizure of garbage left for collection outside the curtilage of a home. Critics of this practice underline that this analogy ignores that "most people have no idea that they risk surrendering their genetic identity to the police by, for instance, failing to destroy a used coffee cup. Moreover, even if they do realize it, there is no way to avoid abandoning one's DNA in public." 
The United States Supreme Court ruled in Maryland v. King (2013) that DNA sampling of prisoners arrested for serious crimes is constitutional.   
In the UK, the Human Tissue Act 2004 prohibits private individuals from covertly collecting biological samples (hair, fingernails, etc.) for DNA analysis, but exempts medical and criminal investigations from the prohibition. 
England and Wales
Evidence from an expert who has compared DNA samples must be accompanied by evidence as to the sources of the samples and the procedures for obtaining the DNA profiles. The judge must ensure that the jury understands the significance of DNA matches and mismatches in the profiles. The judge must also ensure that the jury does not confuse the match probability (the probability that a person chosen at random has a DNA profile matching the sample from the scene) with the probability that a person with matching DNA committed the crime. In R v. Doheny (1996), Phillips LJ gave this example of a summing up, which should be carefully tailored to the particular facts in each case:
Members of the Jury, if you accept the scientific evidence called by the Crown, this indicates that there are probably only four or five white males in the United Kingdom from whom that semen stain could have come. The Defendant is one of them. If that is the position, the decision you have to reach, on all the evidence, is whether you are sure that it was the Defendant who left that stain or whether it is possible that it was one of that other small group of men who share the same DNA characteristics.
Juries should weigh up conflicting and corroborative evidence, using their own common sense and not by using mathematical formulae, such as Bayes' theorem, so as to avoid "confusion, misunderstanding and misjudgment". 
Presentation and evaluation of evidence of partial or incomplete DNA profiles
In R v Bates,  Moore-Bick LJ said:
We can see no reason why partial profile DNA evidence should not be admissible provided that the jury are made aware of its inherent limitations and are given a sufficient explanation to enable them to evaluate it. There may be cases where the match probability in relation to all the samples tested is so great that the judge would consider its probative value to be minimal and decide to exclude the evidence in the exercise of his discretion, but this gives rise to no new question of principle and can be left for decision on a case by case basis. However, the fact that there exists in the case of all partial profile evidence the possibility that a "missing" allele might exculpate the accused altogether does not provide sufficient grounds for rejecting such evidence. In many cases there is a possibility (at least in theory) that evidence that would assist the accused and perhaps even exculpate him altogether exists, but that does not provide grounds for excluding relevant evidence that is available and otherwise admissible, though it does make it important to ensure that the jury are given sufficient information to enable them to evaluate that evidence properly.
DNA testing in the United States
There are state laws on DNA profiling in all 50 states of the United States.  Detailed information on database laws in each state can be found at the National Conference of State Legislatures website. 
Development of artificial DNA
In August 2009, scientists in Israel raised serious doubts concerning the use of DNA by law enforcement as the ultimate method of identification. In a paper published in the journal Forensic Science International: Genetics, the Israeli researchers demonstrated that it is possible to manufacture DNA in a laboratory, thus falsifying DNA evidence. The scientists fabricated saliva and blood samples, which originally contained DNA from a person other than the supposed donor of the blood and saliva. 
The researchers also showed that, using a DNA database, it is possible to take information from a profile and manufacture DNA to match it, and that this can be done without access to any actual DNA from the person whose DNA they are duplicating. The synthetic DNA oligos required for the procedure are common in molecular laboratories. 
The New York Times quoted the lead author, Daniel Frumkin, as saying, "You can just engineer a crime scene ... any biology undergraduate could perform this". Frumkin perfected a test that can differentiate real DNA samples from fake ones. His test detects epigenetic modifications, in particular DNA methylation. Seventy percent of the DNA in any human genome is methylated, meaning it contains methyl group modifications within a CpG dinucleotide context. Methylation at the promoter region is associated with gene silencing. Synthetic DNA lacks this epigenetic modification, which allows the test to distinguish manufactured DNA from genuine DNA.
It is unknown how many police departments, if any, currently use the test. No police lab has publicly announced that it is using the new test to verify DNA results. 
- In 1986, Richard Buckland was exonerated, despite having admitted to the rape and murder of a teenager near Leicester, the city where DNA profiling was first developed. This was the first use of DNA fingerprinting in a criminal investigation, and the first to prove a suspect's innocence.  The following year Colin Pitchfork was identified as the perpetrator of the same murder, in addition to another, using the same techniques that had cleared Buckland. 
- In 1987, genetic fingerprinting was used in a US criminal court for the first time in the trial of a man accused of unlawful intercourse with a mentally handicapped 14-year-old female who gave birth to a baby. 
- In 1987, Florida rapist Tommie Lee Andrews was the first person in the United States to be convicted as a result of DNA evidence, for raping a woman during a burglary. He was convicted on 6 November 1987 and sentenced to 22 years in prison.
- In 1988, Timothy Wilson Spencer was the first man in Virginia to be sentenced to death through DNA testing, for several rape and murder charges. He was dubbed "The South Side Strangler" because he killed victims on the south side of Richmond, Virginia. He was later charged with rape and first-degree murder and was sentenced to death. He was executed on 27 April 1994. David Vasquez, initially convicted of one of Spencer's crimes, became the first man in America exonerated based on DNA evidence.
- In 1989, Chicago man Gary Dotson was the first person whose conviction was overturned using DNA evidence.
- In 1990, a violent murder of a young student in Brno was the first criminal case in Czechoslovakia solved by DNA evidence, with the murderer sentenced to 23 years in prison. 
- In 1991, Allan Legere was the first Canadian to be convicted as a result of DNA evidence, for four murders he had committed while an escaped prisoner in 1989. During his trial, his defense argued that the relatively shallow gene pool of the region could lead to false positives.
- In 1992, DNA evidence was used to prove that Nazi doctor Josef Mengele was buried in Brazil under the name Wolfgang Gerhard.
- In 1992, DNA from a palo verde tree was used to convict Mark Alan Bogan of murder. DNA from seed pods of a tree at the crime scene was found to match that of seed pods found in Bogan's truck. This is the first instance of plant DNA admitted in a criminal case. 
- In 1993, Kirk Bloodsworth became the first person convicted of murder and sentenced to death whose conviction was overturned using DNA evidence.
- The 1993 rape and murder of Mia Zapata, lead singer for the Seattle punk band The Gits, was unsolved nine years after the murder. A database search in 2001 failed, but the killer's DNA was collected when he was arrested in Florida for burglary and domestic abuse in 2002.
- The science was made famous in the United States in 1994 when prosecutors heavily relied on DNA evidence allegedly linking O. J. Simpson to a double murder. The case also brought to light the laboratory difficulties and handling procedure mishaps that can cause such evidence to be significantly doubted.
- In 1994, Royal Canadian Mounted Police (RCMP) detectives successfully tested hairs from a cat known as Snowball, and used the test to link a man to the murder of his wife, thus marking for the first time in forensic history the use of non-human animal DNA to identify a criminal (plant DNA was used in 1992, see above).
- In 1994, the claim that Anna Anderson was Grand Duchess Anastasia Nikolaevna of Russia was tested after her death using samples of her tissue that had been stored at a Charlottesville, Virginia hospital following a medical procedure. The tissue was tested using DNA fingerprinting, and showed that she bore no relation to the Romanovs. 
- In 1994, Earl Washington, Jr., of Virginia had his death sentence commuted to life imprisonment a week before his scheduled execution date based on DNA evidence. He received a full pardon in 2000 based on more advanced testing.  His case is often cited by opponents of the death penalty.
- In 1995, the British Forensic Science Service carried out its first mass intelligence DNA screening in the investigation of the Naomi Smith murder case.
- In 1998, Richard J. Schmidt was convicted of attempted second-degree murder when it was shown that there was a link between the viral DNA of the human immunodeficiency virus (HIV) he had been accused of injecting into his girlfriend and viral DNA from one of his patients with AIDS. This was the first time viral DNA fingerprinting had been used as evidence in a criminal trial.
- In 1999, Raymond Easton, a disabled man from Swindon, England, was arrested and detained for seven hours in connection with a burglary. He was released due to an inaccurate DNA match. His DNA had been retained on file after an unrelated domestic incident some time previously. 
- In 2000, Frank Lee Smith was proved innocent by DNA profiling of the murder of an eight-year-old girl after spending 14 years on death row in Florida, USA. However, he had died of cancer just before his innocence was proven. In view of this, the Florida state governor ordered that in future any death row inmate claiming innocence should have DNA testing.
- In May 2000 Gordon Graham murdered Paul Gault at his home in Lisburn, Northern Ireland. Graham was convicted of the murder when his DNA was found on a sports bag left in the house as part of an elaborate ploy to suggest the murder occurred after a burglary had gone wrong. Graham was having an affair with the victim's wife at the time of the murder. It was the first time Low Copy Number DNA was used in Northern Ireland. 
- In 2001, Wayne Butler was convicted for the murder of Celia Douty. It was the first murder in Australia to be solved using DNA profiling. 
- In 2002, the body of James Hanratty, hanged in 1962 for the "A6 murder", was exhumed and DNA samples from the body and members of his family were analysed. The results convinced Court of Appeal judges that Hanratty's guilt, which had been strenuously disputed by campaigners, was proved "beyond doubt".  Paul Foot and some other campaigners continued to believe in Hanratty's innocence and argued that the DNA evidence could have been contaminated, noting that the small DNA samples from items of clothing, kept in a police laboratory for over 40 years "in conditions that do not satisfy modern evidential standards", had had to be subjected to very new amplification techniques in order to yield any genetic profile.  However, no DNA other than Hanratty's was found on the evidence tested, contrary to what would have been expected had the evidence indeed been contaminated. 
- In 2002, DNA testing was used to exonerate Douglas Echols, a man who was wrongfully convicted in a 1986 rape case. Echols was the 114th person to be exonerated through post-conviction DNA testing.
- In August 2002, Annalisa Vincenzi was shot dead in Tuscany. Bartender Peter Hamkin, 23, was arrested, in Merseyside in March 2003 on an extradition warrant heard at Bow Street Magistrates' Court in London to establish whether he should be taken to Italy to face a murder charge. DNA "proved" he shot her, but he was cleared on other evidence. 
- In 2003, Welshman Jeffrey Gafoor was convicted of the 1988 murder of Lynette White, when crime scene evidence collected 12 years earlier was re-examined using STR techniques, resulting in a match with his nephew.  This may be the first known example of the DNA of an innocent yet related individual being used to identify the actual criminal, via "familial searching".
- In March 2003, Josiah Sutton was released from prison after serving four years of a twelve-year sentence for a sexual assault charge. Questionable DNA samples taken from Sutton were retested in the wake of the Houston Police Department's crime lab scandal of mishandling DNA evidence.
- In June 2003, because of new DNA evidence, Dennis Halstead, John Kogut and John Restivo won a re-trial on their murder convictions; the convictions were struck down and they were released. The three men had already served eighteen years of their thirty-plus-year sentences.
- The trial of Robert Pickton (convicted in December 2003) is notable in that DNA evidence is being used primarily to identify the victims, and in many cases to prove their existence.
- In 2004, DNA testing shed new light into the mysterious 1912 disappearance of Bobby Dunbar, a four-year-old boy who vanished during a fishing trip. He was allegedly found alive eight months later in the custody of William Cantwell Walters, but another woman claimed that the boy was her son, Bruce Anderson, whom she had entrusted in Walters' custody. The courts disbelieved her claim and convicted Walters for the kidnapping. The boy was raised and known as Bobby Dunbar throughout the rest of his life. However, DNA tests on Dunbar's son and nephew revealed the two were not related, thus establishing that the boy found in 1912 was not Bobby Dunbar, whose real fate remains unknown. 
- In 2005, Gary Leiterman was convicted of the 1969 murder of Jane Mixer, a law student at the University of Michigan, after DNA found on Mixer's pantyhose was matched to Leiterman. DNA in a drop of blood on Mixer's hand was matched to John Ruelas, who was only four years old in 1969 and was never successfully connected to the case in any other way. Leiterman's defense unsuccessfully argued that the unexplained match of the blood spot to Ruelas pointed to cross-contamination and raised doubts about the reliability of the lab's identification of Leiterman. 
- In December 2005, Evan Simmons was proven innocent of a 1981 attack on an Atlanta woman after serving twenty-four years in prison. He is the 164th person in the United States and the fifth in Georgia to be freed using post-conviction DNA testing.
- In November 2008, Anthony Curcio was arrested for masterminding one of the most elaborately planned armored car heists in history. DNA evidence linked Curcio to the crime. 
- In March 2009, Sean Hodgson—convicted of the 1979 killing of Teresa De Simone, 22, in her car in Southampton—was released after tests proved DNA from the scene was not his. It was later matched to DNA retrieved from the exhumed body of David Lace. Lace had previously confessed to the crime but was not believed by the detectives. He served time in prison for other crimes committed at the same time as the murder and then committed suicide in 1988.
- In 2012, familial DNA profiling led to Alice Collins Plebuch's unexpected discovery that her ancestral bloodline was not purely Irish, as she had previously been led to believe, but that her heritage also contained European Jewish, Middle Eastern and Eastern European ancestry. This led her into an extensive genealogy investigation which resulted in her uncovering the genetic family of her father, who had been switched at birth.
- In 2016, Anthea Ring, abandoned as a baby, was able to use a DNA sample and a DNA matching database to discover her deceased mother's identity and roots in County Mayo, Ireland. A recently developed forensic test was subsequently used to capture DNA from saliva left on old stamps and envelopes by her suspected father, uncovered through painstaking genealogy research. The DNA in the first three samples was too degraded to use. However, on the fourth, more than enough DNA was found. The test, which has a degree of accuracy acceptable in UK courts, proved that a man named Patrick Coyne was her biological father.
- In 2018, the "Buckskin Girl" (a body found in 1981 in Ohio) was identified as Marcia King from Arkansas using DNA genealogical techniques.
- In 2018, Joseph James DeAngelo was arrested as the main suspect in the Golden State Killer case using DNA and genealogy techniques.
- In 2018, William Earl Talbott II was arrested as a suspect in the 1987 murders of Jay Cook and Tanya Van Cuylenborg with the assistance of genealogical DNA testing. The same genetic genealogist who helped in this case also helped police with 18 other arrests in 2018.
- In 2019, dismembered remains found in a cave in Idaho in 1979 and 1991 were identified through genetic fingerprinting as belonging to Joseph Henry Loveless. Loveless was a habitual criminal who had disappeared after escaping from jail in 1916, where he had been charged with killing his wife Agnes with an axe. Clothes found with the remains matched the description of those Loveless was wearing when he made his escape.
DNA testing is used to establish the right of succession to British titles. 
Wait, I use a different noseband!
Let’s take a quick look at other popular types of nosebands:
Flash Noseband - A flash noseband or flash cavesson looks very similar to a plain cavesson, with the addition of a smaller strap which buckles below the bit and helps to keep the horse’s mouth closed. This type of noseband should also fit two fingers below the zygomatic ridge, but with a one finger tightness. The flash strap should have a two finger tightness when fastened.
Figure 8 or Grackle – A Figure 8 or Grackle noseband crosses in front of the nose and fastens in two places behind the jaw. The center pad where the straps cross should fit high on the nose. The top straps will cross over the zygomatic ridge before buckling behind the jaw, the lower straps will buckle below the bit, behind the chin. Each strap should have a ½” or one finger tightness between the strap and the face.
Drop Noseband – A drop noseband fits low on the horse’s nose and also aids in keeping the horse’s mouth shut. The drop should fit below the bit but above the end of the nasal bone – the ends of the nasal bone are fragile, so if you’re uncertain regarding the fit you may want to seek assistance or try a different cavesson. When tightened, you should be able to fit one finger between the noseband and the face.
Field of View (FOV)
In a microscope, we ordinarily observe things within a circular space (or field) as defined by the lenses. We refer to this observable area as the field of view (FOV). Understanding the size of the FOV is important because the actual sizes of objects can be calculated using the magnification of the lenses.
FOV can be described as the area of a circle:
Area = πr²
What are the effects of magnification on FOV?
1) Lowest Magnification 2) Low Magnification 3) High Magnification 4) Highest Magnification
In image 1, we can see a model of DNA on a table with a water bottle and a large area of the room. Image 2 displays less of the room in the background but the DNA model is larger in appearance because the magnification is greater. In image 3, we no longer see evidence of a door and the DNA model is much larger than before. In image 4, we no longer see the table the model and water bottle rest upon. While the last image is largest, we see less of the surrounding objects. We have higher magnification at the cost of field of view. FOV is inversely related to the magnification level.
Field of View Calculation
- Examine a ruler under scanning magnification
- Measure the diameter in mm
- diameter= _________________
- radius= ____________________
- Calculate the field of view at this magnification= __________________
- Examine a ruler under low magnification (10x)
- Measure the diameter in mm
- diameter= _________________
- radius= ____________________
- Calculate the field of view at this magnification= ____________________
- What is the relationship between the magnification and field of view?
- What is the proportion of change in field of view when doubling the magnification?
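The calculations above can be sketched numerically. This is a minimal illustration, assuming the FOV diameter scales inversely with magnification (the example diameter and magnification values are hypothetical, not measurements from this exercise):

```python
import math

def fov_area(diameter_mm):
    """Area of the circular field of view, Area = pi * r^2."""
    radius = diameter_mm / 2
    return math.pi * radius ** 2

def fov_diameter(base_diameter_mm, base_mag, new_mag):
    """Assumption: FOV diameter is inversely proportional to magnification."""
    return base_diameter_mm * base_mag / new_mag

# Hypothetical example: a 4.5 mm FOV diameter measured at 4x (scanning)
d_scan = 4.5
print(fov_area(d_scan))               # FOV area at scanning power, in mm^2
print(fov_diameter(d_scan, 4, 10))    # predicted diameter at 10x (low power)
print(fov_diameter(d_scan, 4, 8))     # doubling magnification halves the diameter,
                                      # so the FOV area drops to one quarter
```

Under this assumption, doubling the magnification halves the FOV diameter and quarters its area, which matches the inverse relationship described above.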
The Letter “e”
- Follow the link https://www.ncbionetwork.org/iet/microscope/
- Click on “Explore” → Click the sample box “?” → Click “Sample Slides”
- Click “Letter E”
- What do you observe with the image under the microscope?
- The image is blurry, so adjust the focus
- Draw the “e” at scanning, low and high magnification
We have developed microscale polymer capsules that are able to chemically degrade a certain type of polymeric microbead in their immediate vicinity. The inspiration here is from the body’s immune system, where killer T cells selectively destroy cancerous cells or cells infected by pathogens while leaving healthy cells alone. The “killer” capsules are made from the cationic biopolymer chitosan by a combination of ionic cross-linking (using multivalent tripolyphosphate anions) and subsequent covalent cross-linking (using glutaraldehyde). During capsule formation, the enzyme glucose oxidase (GOx) is encapsulated in these capsules. The target beads are made by ionic cross-linking of the biopolymer alginate using copper (Cu 2+ ) cations. The killer capsules harvest glucose from their surroundings, which is then enzymatically converted by GOx into gluconate ions. These ions are known for their ability to chelate Cu 2+ cations. Thus, when a killer capsule is next to a target alginate bead, the gluconate ions diffuse into the bead and extract the Cu 2+ cross-links, causing the disintegration of the target bead. Such destruction is visualized in real time using optical microscopy. The destruction is specific, i.e., other microparticles that do not contain Cu 2+ are left undisturbed. Moreover, the destruction is localized, i.e., the targets destroyed in the short term are the ones right next to the killer beads. The time scale for destruction depends on the concentration of encapsulated enzyme in the capsules.
While Robert Hooke’s discovery of cells in 1665 led to the proposal of the Cell Theory, his observations also misled early membrane theory: because only plant cells could be observed at the time, it was assumed that all cells were bounded by a hard cell wall. Microscopists focused on the cell wall for well over 150 years, until advances in microscopy were made. In the early 19th century, after it was found that plant cells could be separated, cells were recognized as being separate entities, unconnected, and bound by individual cell walls. This theory was extended to include animal cells, suggesting a universal mechanism for cell protection and development. By the second half of the 19th century, microscopy was still not advanced enough to distinguish between cell membranes and cell walls. However, some microscopists correctly inferred at this time that, while invisible, cell membranes existed in animal cells, because cellular components moved internally but not externally, and that these membranes were not the equivalent of the plant cell wall. It was also inferred that cell membranes were not vital components of all cells. Toward the end of the 19th century, many still disputed the existence of a cell membrane. In 1890, an update to the Cell Theory stated that cell membranes existed, but were merely secondary structures. It was not until later studies of osmosis and permeability that cell membranes gained more recognition. In 1895, Ernest Overton proposed that cell membranes were made of lipids.
The lipid bilayer hypothesis, proposed in 1925 by Gorter and Grendel, prompted speculation about the bilayer structure of the cell membrane based on crystallographic studies and soap bubble observations. In an attempt to accept or reject the hypothesis, researchers measured membrane thickness. In 1925, Fricke determined that the thickness of erythrocyte and yeast cell membranes ranged between 3.3 and 4 nm, a thickness compatible with a lipid monolayer. The choice of the dielectric constant used in these studies was called into question, but later tests could not disprove the results of the initial experiment. Independently, the leptoscope was invented to measure very thin membranes by comparing the intensity of light reflected from a sample to that of a membrane standard of known thickness. The instrument could resolve thicknesses that, depending on pH and the presence of membrane proteins, ranged from 8.6 to 23.2 nm, with the lower measurements supporting the lipid bilayer hypothesis. Later, in the 1930s, the paucimolecular model of Davson and Danielli (1935) came to be generally accepted as the model of membrane structure. It was based on studies of surface tension between oils and echinoderm eggs. Since the measured surface tensions were much lower than would be expected for an oil–water interface, it was assumed that some substance lowered the interfacial tension at the surface of cells. The model proposed a lipid bilayer sandwiched between two thin protein layers. The paucimolecular model immediately became popular and dominated cell membrane studies for the following 30 years, until it was rivaled by the fluid mosaic model of Singer and Nicolson (1972).
Despite the numerous models of the cell membrane proposed before it, the fluid mosaic model remains the primary archetype for the cell membrane long after its inception in the 1970s. Although the fluid mosaic model has been updated to reflect contemporary discoveries, the basics have remained constant: the membrane is a lipid bilayer with hydrophilic exterior heads and a hydrophobic interior; proteins can interact with the hydrophilic heads through polar interactions, while proteins that span the bilayer fully or partially have hydrophobic amino acids that interact with the non-polar lipid interior. The fluid mosaic model not only provided an accurate representation of membrane mechanics, it also advanced the study of hydrophobic forces, which would later become an essential concept for describing biological macromolecules.
For almost two centuries, membranes were observed but mostly disregarded as a structure with an important cellular function. It was not until the 20th century that the significance of the cell membrane was acknowledged. Finally, two scientists, Gorter and Grendel (1925), discovered that the membrane is lipid-based, and from this they advanced the idea that the structure would have to be arranged in layers. Comparing the area covered by the extracted lipids with the surface area of the cells yielded an estimated ratio of 2:1, providing the first evidence for the bilayer structure known today. This discovery initiated many new studies across various fields of science, and the structure and functions of the cell membrane are now widely accepted.
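Gorter and Grendel's inference can be illustrated with a small calculation: if lipid extracted from cells, spread as a monolayer, covers about twice the measured cell surface area, the membrane must be two lipid molecules thick. The numbers below are purely hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical illustrative values, not Gorter and Grendel's actual data.
cell_surface_area = 100.0      # total cell surface area (arbitrary units)
monolayer_lipid_area = 199.0   # area covered by the extracted lipid as a monolayer

ratio = monolayer_lipid_area / cell_surface_area
layers = round(ratio)          # estimated number of lipid layers in the membrane
print(layers)                  # a ratio near 2:1 implies a bilayer
```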
The structure has been variously referred to by different writers as the ectoplast (de Vries, 1885), Plasmahaut (plasma skin, Pfeffer, 1877, 1891), Hautschicht (skin layer, Pfeffer, 1886; used with a different meaning by Hofmeister, 1867), plasmatic membrane (Pfeffer, 1900), plasma membrane, cytoplasmic membrane, cell envelope and cell membrane. Some authors who did not believe that there was a functional permeable boundary at the surface of the cell preferred to use the term plasmalemma (coined by Mast, 1924) for the external region of the cell.
Cell membranes contain a variety of biological molecules, notably lipids and proteins. The composition is not fixed but changes constantly to maintain fluidity and respond to the environment, even fluctuating during different stages of cell development. Specifically, the amount of cholesterol in the human primary neuron cell membrane changes, and this change in composition affects fluidity throughout developmental stages.
Material is incorporated into the membrane, or deleted from it, by a variety of mechanisms:
- Fusion of intracellular vesicles with the membrane (exocytosis) not only excretes the contents of the vesicle but also incorporates the vesicle membrane's components into the cell membrane. The membrane may form blebs around extracellular material that pinch off to become vesicles (endocytosis).
- If a membrane is continuous with a tubular structure made of membrane material, then material from the tube can be drawn into the membrane continuously.
- Although the concentration of membrane components in the aqueous phase is low (stable membrane components have low solubility in water), there is an exchange of molecules between the lipid and aqueous phases.
The cell membrane consists of three classes of amphipathic lipids: phospholipids, glycolipids, and sterols. The amount of each depends upon the type of cell, but in the majority of cases phospholipids are the most abundant, often contributing over 50% of all lipids in plasma membranes. Glycolipids account for only a minute amount, about 2%, and sterols make up the rest. In RBC studies, 30% of the plasma membrane is lipid. However, for the majority of eukaryotic cells, the composition of plasma membranes is about half lipids and half proteins by weight.
The fatty chains in phospholipids and glycolipids usually contain an even number of carbon atoms, typically between 16 and 20. The 16- and 18-carbon fatty acids are the most common. Fatty acids may be saturated or unsaturated, with the configuration of the double bonds nearly always "cis". The length and the degree of unsaturation of fatty acid chains have a profound effect on membrane fluidity as unsaturated lipids create a kink, preventing the fatty acids from packing together as tightly, thus decreasing the melting temperature (increasing the fluidity) of the membrane.   The ability of some organisms to regulate the fluidity of their cell membranes by altering lipid composition is called homeoviscous adaptation.
The entire membrane is held together via non-covalent interactions of the hydrophobic tails; however, the structure is quite fluid and not fixed rigidly in place. Under physiological conditions, phospholipid molecules in the cell membrane are in the liquid crystalline state. This means the lipid molecules are free to diffuse and exhibit rapid lateral diffusion within the leaflet in which they are present. However, the exchange of phospholipid molecules between the intracellular and extracellular leaflets of the bilayer is a very slow process. Lipid rafts and caveolae are examples of cholesterol-enriched microdomains in the cell membrane. Also, the fraction of lipid in direct contact with integral membrane proteins, which is tightly bound to the protein surface, is called the annular lipid shell; it behaves as part of the protein complex.
In animal cells cholesterol is normally found dispersed in varying degrees throughout cell membranes, in the irregular spaces between the hydrophobic tails of the membrane lipids, where it confers a stiffening and strengthening effect on the membrane.  Additionally, the amount of cholesterol in biological membranes varies between organisms, cell types, and even in individual cells. Cholesterol, a major component of animal plasma membranes, regulates the fluidity of the overall membrane, meaning that cholesterol controls the amount of movement of the various cell membrane components based on its concentrations.  In high temperatures, cholesterol inhibits the movement of phospholipid fatty acid chains, causing a reduced permeability to small molecules and reduced membrane fluidity. The opposite is true for the role of cholesterol in cooler temperatures. Cholesterol production, and thus concentration, is up-regulated (increased) in response to cold temperature. At cold temperatures, cholesterol interferes with fatty acid chain interactions. Acting as antifreeze, cholesterol maintains the fluidity of the membrane. Cholesterol is more abundant in cold-weather animals than warm-weather animals. In plants, which lack cholesterol, related compounds called sterols perform the same function as cholesterol. 
Phospholipids forming lipid vesicles
Lipid vesicles or liposomes are approximately spherical pockets that are enclosed by a lipid bilayer. These structures are used in laboratories to study the effects of chemicals on cells by delivering the chemicals directly to the cell, as well as to gain more insight into cell membrane permeability. Lipid vesicles and liposomes are formed by first suspending a lipid in an aqueous solution and then agitating the mixture through sonication, resulting in a vesicle. Measuring the rate of efflux from the inside of the vesicle to the ambient solution allows researchers to better understand membrane permeability. Vesicles can be formed with molecules and ions inside by forming the vesicle with the desired molecule or ion present in the solution. Proteins can also be embedded into the membrane by solubilizing the desired proteins in the presence of detergents and attaching them to the phospholipids in which the liposome is formed. These provide researchers with a tool to examine various membrane protein functions.
Plasma membranes also contain carbohydrates, predominantly glycoproteins, but with some glycolipids (cerebrosides and gangliosides). Carbohydrates play an important role in cell–cell recognition in eukaryotes; they are located on the surface of the cell, where they recognize host cells and share information. Viruses that bind to cells using these receptors cause an infection. For the most part, glycosylation does not occur on membranes within the cell; rather, it generally occurs on the extracellular surface of the plasma membrane. The glycocalyx is an important feature of all cells, especially epithelia with microvilli. Recent data suggest the glycocalyx participates in cell adhesion, lymphocyte homing, and many other functions. The penultimate sugar is galactose and the terminal sugar is sialic acid, as the sugar backbone is modified in the Golgi apparatus. Sialic acid carries a negative charge, providing an external barrier to charged particles.
| Type | Description | Examples |
| --- | --- | --- |
| Integral proteins (or transmembrane proteins) | Span the membrane and have a hydrophilic cytosolic domain, which interacts with internal molecules; a hydrophobic membrane-spanning domain that anchors the protein within the cell membrane; and a hydrophilic extracellular domain that interacts with external molecules. The hydrophobic domain consists of one, multiple, or a combination of α-helix and β-sheet protein motifs. | Ion channels, proton pumps, G protein-coupled receptors |
| Lipid-anchored proteins | Covalently bound to one or more lipid molecules, which insert hydrophobically into the cell membrane and anchor the protein. The protein itself is not in contact with the membrane. | G proteins |
| Peripheral proteins | Attached to integral membrane proteins, or associated with peripheral regions of the lipid bilayer. These proteins tend to have only temporary interactions with biological membranes; once reacted, the molecule dissociates to carry on its work in the cytoplasm. | Some enzymes, some hormones |
The cell membrane has a large protein content, typically around 50% of membrane volume. These proteins are important for the cell because they are responsible for various biological activities. Approximately a third of the genes in yeast code specifically for them, and this number is even higher in multicellular organisms. Membrane proteins consist of three main types: integral proteins, peripheral proteins, and lipid-anchored proteins.
As shown in the adjacent table, integral proteins are amphipathic transmembrane proteins. Examples of integral proteins include ion channels, proton pumps, and G protein-coupled receptors. Ion channels allow inorganic ions such as sodium, potassium, calcium, or chloride to diffuse down their electrochemical gradient across the lipid bilayer through hydrophilic pores in the membrane. The electrical behavior of cells (e.g. nerve cells) is controlled by ion channels. Proton pumps are protein pumps embedded in the lipid bilayer that allow protons to travel through the membrane by transferring from one amino acid side chain to another. Processes such as electron transport and ATP generation use proton pumps. A G protein-coupled receptor is a single polypeptide chain that crosses the lipid bilayer seven times, responding to signal molecules (e.g. hormones and neurotransmitters). G protein-coupled receptors are used in processes such as cell-to-cell signaling, the regulation of cAMP production, and the regulation of ion channels.
The cell membrane, being exposed to the outside environment, is an important site of cell–cell communication. As such, a large variety of protein receptors and identification proteins, such as antigens, are present on the surface of the membrane. Functions of membrane proteins can also include cell–cell contact, surface recognition, cytoskeleton contact, signaling, enzymatic activity, or transporting substances across the membrane.
Most membrane proteins must be inserted in some way into the membrane.  For this to occur, an N-terminus "signal sequence" of amino acids directs proteins to the endoplasmic reticulum, which inserts the proteins into a lipid bilayer. Once inserted, the proteins are then transported to their final destination in vesicles, where the vesicle fuses with the target membrane.
The cell membrane surrounds the cytoplasm of living cells, physically separating the intracellular components from the extracellular environment. The cell membrane also plays a role in anchoring the cytoskeleton to provide shape to the cell, and in attaching to the extracellular matrix and other cells to hold them together to form tissues. Fungi, bacteria, most archaea, and plants also have a cell wall, which provides a mechanical support to the cell and precludes the passage of larger molecules.
The cell membrane is selectively permeable and able to regulate what enters and exits the cell, thus facilitating the transport of materials needed for survival. The movement of substances across the membrane can be either "passive", occurring without the input of cellular energy, or "active", requiring the cell to expend energy in transporting it. The membrane also maintains the cell potential. The cell membrane thus works as a selective filter that allows only certain things to come inside or go outside the cell. The cell employs a number of transport mechanisms that involve biological membranes:
1. Passive osmosis and diffusion: Some substances (small molecules, ions) such as carbon dioxide (CO2) and oxygen (O2) can move across the plasma membrane by diffusion, which is a passive transport process. Because the membrane acts as a barrier for certain molecules and ions, they can occur in different concentrations on the two sides of the membrane. Diffusion occurs when small molecules and ions move freely from high concentration to low concentration until the two sides of the membrane equilibrate. It is considered a passive transport process because it does not require energy and is propelled by the concentration gradient across the membrane. Such a concentration gradient across a semipermeable membrane sets up an osmotic flow of water. Osmosis, in biological systems, involves a solvent moving through a semipermeable membrane similarly to passive diffusion, as the solvent still moves with the concentration gradient and requires no energy. While water is the most common solvent in cells, it can also be other liquids as well as supercritical liquids and gases.
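As a rough numerical sketch (not a physiological model), passive diffusion can be pictured as a net flux proportional to the concentration difference across the membrane; with no energy input, both compartments drift toward the same concentration. The rate constant `k` is an arbitrary illustrative value:

```python
# Two-compartment sketch of passive diffusion: net flux ~ concentration difference.
# k is an arbitrary permeability-like rate constant (illustrative only).
def diffuse(c_out, c_in, k=0.1, steps=100):
    for _ in range(steps):
        flux = k * (c_out - c_in)  # net movement from high to low concentration
        c_out -= flux
        c_in += flux
    return c_out, c_in

c_out, c_in = diffuse(10.0, 0.0)
print(round(c_out, 3), round(c_in, 3))  # both approach 5.0, the equilibrium value
```

Note that the total amount (c_out + c_in) is conserved throughout; only the gradient decays, which is why no cellular energy is required.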
2. Transmembrane protein channels and transporters: Transmembrane proteins extend through the lipid bilayer of the membrane; they function on both sides of the membrane to transport molecules across it. Nutrients, such as sugars or amino acids, must enter the cell, and certain products of metabolism must leave it. Such molecules can diffuse passively through protein channels such as aquaporins in facilitated diffusion, or be pumped across the membrane by transmembrane transporters. Channel proteins, also called permeases, are usually quite specific: they recognize and transport only a limited variety of chemical substances, often limited to a single substance. Another example of a transmembrane protein is a cell-surface receptor, which allows cell signaling molecules to communicate between cells.
3. Endocytosis: Endocytosis is the process in which cells absorb molecules by engulfing them. The plasma membrane creates a small deformation inward, called an invagination, in which the substance to be transported is captured. This invagination is caused by proteins on the outside of the cell membrane, acting as receptors and clustering into depressions that eventually promote accumulation of more proteins and lipids on the cytosolic side of the membrane. The deformation then pinches off from the membrane on the inside of the cell, creating a vesicle containing the captured substance. Endocytosis is a pathway for internalizing solid particles ("cell eating" or phagocytosis), small molecules and ions ("cell drinking" or pinocytosis), and macromolecules. Endocytosis requires energy and is thus a form of active transport.
4. Exocytosis: Just as material can be brought into the cell by invagination and formation of a vesicle, the membrane of a vesicle can be fused with the plasma membrane, extruding its contents to the surrounding medium. This is the process of exocytosis. Exocytosis occurs in various cells to remove undigested residues of substances brought in by endocytosis, to secrete substances such as hormones and enzymes, and to transport a substance completely across a cellular barrier. In the process of exocytosis, the undigested waste-containing food vacuole or the secretory vesicle budded from the Golgi apparatus is first moved by the cytoskeleton from the interior of the cell to the surface. The vesicle membrane comes in contact with the plasma membrane. The lipid molecules of the two bilayers rearrange themselves and the two membranes are thus fused. A passage is formed in the fused membrane and the vesicle discharges its contents outside the cell.
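The passive mechanism described in point 1 can be illustrated numerically. Below is a minimal sketch of two compartments equilibrating by simple diffusion, where net flux is proportional to the concentration difference (Fick's first law); the permeability value and concentrations are illustrative, not measured data.

```python
# Two-compartment diffusion sketch: net flux across the membrane is
# proportional to the concentration difference (Fick's first law).
# All numbers are illustrative, not physiological measurements.

def diffuse(c_out, c_in, permeability=0.1, steps=100):
    """Equilibrate two equal-volume compartments by passive diffusion."""
    for _ in range(steps):
        flux = permeability * (c_out - c_in)  # net movement high -> low
        c_out -= flux
        c_in += flux
    return c_out, c_in

# Start with all solute outside; diffusion equilibrates the two sides
# without any energy input, driven only by the gradient.
out, inside = diffuse(10.0, 0.0)
print(round(out, 3), round(inside, 3))  # both approach 5.0
```

Note that the total amount of solute is conserved at every step; only its distribution changes, which is why the process stops once the gradient vanishes.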
Prokaryotes are divided into two different groups, Archaea and Bacteria, with bacteria dividing further into gram-positive and gram-negative. Gram-negative bacteria have both a plasma membrane and an outer membrane separated by periplasm; however, other prokaryotes have only a plasma membrane. These two membranes differ in many aspects. The outer membrane of gram-negative bacteria differs from that of other prokaryotes due to phospholipids forming the exterior of the bilayer, and lipoproteins and phospholipids forming the interior. The outer membrane typically has a porous quality due to the presence of membrane proteins, such as gram-negative porins, which are pore-forming proteins. The inner, plasma membrane is also generally symmetric, whereas the outer membrane is asymmetric because of proteins such as those mentioned above. For prokaryotic membranes, multiple factors can affect fluidity. One of the major factors is fatty acid composition. For example, when the bacterium Staphylococcus aureus was grown at 37 °C for 24 h, the membrane exhibited a more fluid state instead of a gel-like state. This supports the concept that at higher temperatures the membrane is more fluid than at colder temperatures. When the membrane becomes more fluid and needs to be stabilized, the cell will make longer fatty acid chains or saturated fatty acid chains to help stabilize the membrane. Bacteria are also surrounded by a cell wall composed of peptidoglycan (amino acids and sugars). Some eukaryotic cells also have cell walls, but none that are made of peptidoglycan. The outer membrane of gram-negative bacteria is rich in lipopolysaccharides, which are combined poly- or oligosaccharide and carbohydrate lipid regions that stimulate the cell's natural immunity.
The outer membrane can bleb out into periplasmic protrusions under stress conditions, or upon virulence requirements while encountering a host target cell; such blebs may thus work as virulence organelles. Bacterial cells provide numerous examples of the diverse ways in which prokaryotic cell membranes are adapted with structures that suit the organism's niche. For example, proteins on the surface of certain bacterial cells aid in their gliding motion. Many gram-negative bacteria have cell membranes which contain ATP-driven protein exporting systems.
Fluid mosaic model
According to the fluid mosaic model of S. J. Singer and G. L. Nicolson (1972), which replaced the earlier model of Davson and Danielli, biological membranes can be considered as a two-dimensional liquid in which lipid and protein molecules diffuse more or less easily.  Although the lipid bilayers that form the basis of the membranes do indeed form two-dimensional liquids by themselves, the plasma membrane also contains a large quantity of proteins, which provide more structure. Examples of such structures are protein-protein complexes, pickets and fences formed by the actin-based cytoskeleton, and potentially lipid rafts.
Lipid bilayers form through the process of self-assembly. The cell membrane consists primarily of a thin layer of amphipathic phospholipids that spontaneously arrange so that the hydrophobic "tail" regions are isolated from the surrounding water while the hydrophilic "head" regions interact with the intracellular (cytosolic) and extracellular faces of the resulting bilayer. This forms a continuous, spherical lipid bilayer. Hydrophobic interactions (also known as the hydrophobic effect) are the major driving forces in the formation of lipid bilayers. An increase in interactions between hydrophobic molecules (causing clustering of hydrophobic regions) allows water molecules to bond more freely with each other, increasing the entropy of the system. This complex interaction can include noncovalent interactions such as van der Waals, electrostatic and hydrogen bonds.
Lipid bilayers are generally impermeable to ions and polar molecules. The arrangement of hydrophilic heads and hydrophobic tails of the lipid bilayer prevents polar solutes (e.g., amino acids, nucleic acids, carbohydrates, proteins, and ions) from diffusing across the membrane, but generally allows for the passive diffusion of hydrophobic molecules. This affords the cell the ability to control the movement of these substances via transmembrane protein complexes such as pores, channels and gates. Flippases and scramblases concentrate phosphatidylserine, which carries a negative charge, on the inner membrane. Along with NANA, this creates an extra barrier to charged moieties moving through the membrane.
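This charge selectivity is what lets the cell maintain a membrane potential: when a charged ion is held at different concentrations on the two sides of an otherwise impermeable membrane, its equilibrium (Nernst) potential is E = (RT/zF)·ln([out]/[in]). A short sketch, using typical textbook mammalian K+ concentrations as illustrative inputs:

```python
import math

def nernst_potential(c_out, c_in, z=1, temp_k=310.0):
    """Equilibrium (Nernst) potential in volts for an ion of valence z.

    E = (R*T)/(z*F) * ln([out]/[in]), with physiological temperature
    (310 K) as the default.
    """
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return (R * temp_k) / (z * F) * math.log(c_out / c_in)

# Typical textbook K+ concentrations: ~5 mM outside, ~140 mM inside.
e_k = nernst_potential(5.0, 140.0)
print(f"{e_k * 1000:.1f} mV")  # about -89 mV
```

Because the inside concentration of K+ is much higher than the outside, the equilibrium potential is strongly negative, which is close to the resting potential of many cells.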
Membranes serve diverse functions in eukaryotic and prokaryotic cells. One important role is to regulate the movement of materials into and out of cells. The phospholipid bilayer structure (fluid mosaic model) with specific membrane proteins accounts for the selective permeability of the membrane and passive and active transport mechanisms. In addition, membranes in prokaryotes and in the mitochondria and chloroplasts of eukaryotes facilitate the synthesis of ATP through chemiosmosis. 
The apical membrane of a polarized cell is the surface of the plasma membrane that faces inward to the lumen. This is particularly evident in epithelial and endothelial cells, but also describes other polarized cells, such as neurons. The basolateral membrane of a polarized cell is the surface of the plasma membrane that forms its basal and lateral surfaces. It faces outwards, towards the interstitium, and away from the lumen. Basolateral membrane is a compound phrase referring to the terms "basal (base) membrane" and "lateral (side) membrane", which, especially in epithelial cells, are identical in composition and activity. Proteins (such as ion channels and pumps) are free to move from the basal to the lateral surface of the cell or vice versa in accordance with the fluid mosaic model. Tight junctions join epithelial cells near their apical surface to prevent the migration of proteins from the basolateral membrane to the apical membrane. The basal and lateral surfaces thus remain roughly equivalent to one another, yet distinct from the apical surface.
The cell membrane can form different types of "supramembrane" structures such as caveolae, postsynaptic densities, podosomes, invadopodia, focal adhesions, and different types of cell junctions. These structures are usually responsible for cell adhesion, communication, endocytosis and exocytosis. They can be visualized by electron microscopy or fluorescence microscopy. They are composed of specific proteins, such as integrins and cadherins.
The cytoskeleton is found underlying the cell membrane in the cytoplasm and provides a scaffolding for membrane proteins to anchor to, as well as forming organelles that extend from the cell. Indeed, cytoskeletal elements interact extensively and intimately with the cell membrane. Anchoring proteins restrict membrane proteins to a particular cell surface — for example, the apical surface of epithelial cells that line the vertebrate gut — and limit how far they may diffuse within the bilayer. The cytoskeleton is able to form appendage-like organelles, such as cilia, which are microtubule-based extensions covered by the cell membrane, and filopodia, which are actin-based extensions. These extensions are ensheathed in membrane and project from the surface of the cell in order to sense the external environment and/or make contact with the substrate or other cells. The apical surfaces of epithelial cells are dense with actin-based finger-like projections known as microvilli, which increase cell surface area and thereby increase the absorption rate of nutrients. Localized decoupling of the cytoskeleton and cell membrane results in formation of a bleb.
The content of the cell, inside the cell membrane, is composed of numerous membrane-bound organelles, which contribute to the overall function of the cell. The origin, structure, and function of each organelle lead to a large variation in cell composition, due to the individual uniqueness associated with each organelle.
- Mitochondria and chloroplasts are considered to have evolved from bacteria, a proposal known as the endosymbiotic theory. This theory arose from the observation that Paracoccus and Rhodopseudomonas, types of bacteria, share similar functions with mitochondria, and that blue-green algae, or cyanobacteria, share similar functions with chloroplasts. The endosymbiotic theory proposes that through the course of evolution, a eukaryotic cell engulfed these two types of bacteria, leading to the formation of mitochondria and chloroplasts inside eukaryotic cells. This engulfment led to the double-membrane systems of these organelles, in which the outer membrane originated from the host's plasma membrane and the inner membrane was the endosymbiont's plasma membrane. The fact that mitochondria and chloroplasts both contain their own DNA further supports that both of these organelles evolved from engulfed bacteria that thrived inside a eukaryotic cell.
- In eukaryotic cells, the nuclear membrane separates the contents of the nucleus from the cytoplasm of the cell. The nuclear membrane is formed by an inner and outer membrane, providing strict regulation of materials into and out of the nucleus. Materials move between the cytosol and the nucleus through nuclear pores in the nuclear membrane. If a cell's nucleus is more active in transcription, its membrane will have more pores. The protein composition of the nucleus can vary greatly from that of the cytosol, as many proteins are unable to cross through pores via diffusion. Within the nuclear membrane, the inner and outer membranes vary in protein composition, and only the outer membrane is continuous with the endoplasmic reticulum (ER) membrane. Like the ER, the outer membrane also possesses ribosomes responsible for producing and transporting proteins into the space between the two membranes. The nuclear membrane disassembles during the early stages of mitosis and reassembles in later stages of mitosis.
- The ER, which is part of the endomembrane system, makes up a very large portion of the cell's total membrane content. The ER is an enclosed network of tubules and sacs, and its main functions include protein synthesis and lipid metabolism. There are two types of ER, smooth and rough. The rough ER has ribosomes attached to it, used for protein synthesis, while the smooth ER is used more for the processing of toxins and calcium regulation in the cell.
- The Golgi apparatus has two interconnected round Golgi cisternae. Compartments of the apparatus form multiple tubular-reticular networks responsible for organization, stack connection and cargo transport, displaying continuous grape-like strings of vesicles ranging from 50-60 nm. The apparatus consists of three main compartments: flat disc-shaped cisternae, tubular-reticular networks, and vesicles.
The cell membrane has different lipid and protein compositions in distinct types of cells and may therefore have specific names for certain cell types.
- in muscle cells: Sarcolemma is the name given to the cell membrane of muscle cells.  Although the sarcolemma is similar to other cell membranes, it has other functions that set it apart. For instance, the sarcolemma transmits synaptic signals, helps generate action potentials, and is very involved in muscle contraction.  Unlike other cell membranes, the sarcolemma makes up small channels called T-tubules that pass through the entirety of muscle cells. It has also been found that the average sarcolemma is 10 nm thick as opposed to the 4 nm thickness of a general cell membrane. 
- in oocytes: Oolemma is the cell membrane of oocytes (immature egg cells). The oolemma is not consistent with a lipid bilayer, as it lacks a bilayer and does not consist of lipids. Rather, the structure has an inner layer, the fertilization envelope, and an exterior made up of the vitelline layer, which is made up of glycoproteins; however, channels and proteins are still present for their functions in the membrane.
- in nerve cells: Axolemma is the specialized plasma membrane on the axons of nerve cells that is responsible for the generation of the action potential. It consists of a granular, densely packed lipid bilayer that works closely with the cytoskeleton components spectrin and actin. These cytoskeleton components are able to bind to and interact with transmembrane proteins in the axolemma.
The permeability of a membrane is the rate of passive diffusion of molecules through the membrane. These molecules are known as permeant molecules. Permeability depends mainly on the electric charge and polarity of the molecule and to a lesser extent the molar mass of the molecule. Due to the cell membrane's hydrophobic nature, small electrically neutral molecules pass through the membrane more easily than charged, large ones. The inability of charged molecules to pass through the cell membrane results in pH partition of substances throughout the fluid compartments of the body.
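The pH partition effect follows from the Henderson-Hasselbalch equation: only the uncharged fraction of a weak acid or base crosses the membrane readily, and that fraction depends on the local pH. Below is a sketch for a weak acid; the pKa of 3.5 is an illustrative value (roughly that of an aspirin-like drug), not tied to any compound in the text above.

```python
def unionized_fraction_weak_acid(pH, pKa):
    """Fraction of a weak acid in its neutral (membrane-permeant) form.

    From the Henderson-Hasselbalch relation [A-]/[HA] = 10**(pH - pKa),
    the neutral fraction is 1 / (1 + 10**(pH - pKa)).
    """
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# A weak acid with pKa 3.5 is mostly neutral in the acidic stomach
# (pH ~2) but almost fully ionized in plasma (pH 7.4), so once absorbed
# it becomes "trapped" in the plasma compartment as its charged form.
print(unionized_fraction_weak_acid(2.0, 3.5))   # ~0.97 neutral in stomach
print(unionized_fraction_weak_acid(7.4, 3.5))   # ~0.0001 neutral in plasma
```

The large asymmetry between the two fractions is exactly the "pH partition of substances throughout the fluid compartments of the body" described above: the permeant neutral form equilibrates, while the charged form accumulates where the pH keeps it ionized.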
How Does Gaseous Exchange Occur in the Alveoli?
Gaseous exchange occurs in the alveoli by simple diffusion. The blood flowing past the alveoli is rich in carbon dioxide and very poor in oxygen. The gas molecules naturally flow in the direction of lower concentration through the thin gas exchange membrane, which is only two cells thick.
Alveoli are tiny balloon-like structures that inflate with each inhalation. The membranes that surround these tiny sacs are only one cell thick, and they are coated with a special fluid to enable inflation and dissolve gases. This fluid contains a substance (a surfactant) that reduces the surface tension, which could otherwise cause the alveoli to collapse. The sacs have tiny blood vessels in direct contact with them, and these blood vessels also have walls that are only one cell thick.
The alveoli are inflated when the diaphragm contracts and expands the chest cavity. This causes the pressure in the alveoli to drop below atmospheric pressure and air to rush in to inflate them. However, the mix of gases in the lungs is very different than the mix in air because the lungs are constantly releasing carbon dioxide. The body is constantly consuming oxygen and creating carbon dioxide through metabolic processes, and the lungs do not completely empty on exhalation.
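The difference between the gas mix in the alveoli and room air can be quantified with the alveolar gas equation, which estimates alveolar oxygen partial pressure from the inspired O2 fraction and arterial CO2. The sketch below uses standard textbook sea-level values as defaults; real values vary with altitude, ventilation, and metabolism.

```python
def alveolar_po2(fio2=0.21, patm=760.0, ph2o=47.0, paco2=40.0, rq=0.8):
    """Alveolar gas equation: PAO2 = FiO2 * (Patm - PH2O) - PaCO2 / RQ.

    Pressures in mmHg. Defaults are standard textbook sea-level values:
    21% inspired O2, 760 mmHg atmospheric pressure, 47 mmHg water vapor
    pressure at body temperature, 40 mmHg arterial CO2, and a
    respiratory quotient of 0.8.
    """
    return fio2 * (patm - ph2o) - paco2 / rq

# Room air has a PO2 of ~160 mmHg (0.21 * 760), but in the alveoli
# water vapor and continuously released CO2 lower the O2 partial
# pressure to roughly 100 mmHg.
print(round(alveolar_po2(), 1))  # ~99.7 mmHg
```

This is why, as noted above, the alveolar gas mix differs markedly from inhaled air even though the two are continuous: humidification and ongoing CO2 release dilute the oxygen before it ever reaches the exchange surface.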
The Single Trait Cross (Monohybrid Cross)
Monohybrid cross (one trait cross) observing the pod shape of peas.
Monohybrid cross (one trait cross) observing the pod color of peas.
Corn Coloration in an F2 Population (activity)
A corn cob contains hundreds of kernels. Each kernel is a seed that represents an individual organism. In the cob, we can easily see kernel color as a phenotype.
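The expected phenotype ratio among the kernels of a monohybrid F2 population can be tabulated directly from a Punnett square. Below is a sketch for kernel color, assuming a hypothetical dominant allele C (colored) over c (colorless); the allele symbols are chosen for illustration, not taken from the activity itself.

```python
from collections import Counter
from itertools import product

def monohybrid_f2(dominant="C", recessive="c"):
    """Punnett square for a cross of two heterozygotes (Cc x Cc)."""
    gametes = [dominant, recessive]  # each parent contributes C or c
    # Every combination of one gamete from each parent; sorting makes
    # "Cc" and "cC" the same genotype.
    offspring = ["".join(sorted(pair)) for pair in product(gametes, gametes)]
    # The phenotype shows the dominant trait whenever at least one
    # dominant allele is present.
    phenotypes = Counter(
        "colored" if dominant in geno else "colorless" for geno in offspring
    )
    return Counter(offspring), phenotypes

genotypes, phenotypes = monohybrid_f2()
print(dict(genotypes))   # genotype ratio 1 CC : 2 Cc : 1 cc
print(dict(phenotypes))  # phenotype ratio 3 colored : 1 colorless
```

Counting colored versus colorless kernels on an F2 cob and comparing the tally to this expected 3:1 ratio is exactly what the activity asks for; a chi-square test could then assess how well the observed counts fit.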