Saturday, June 1, 2019

Ten myths about vaccination

I was recently asked to put together a list of common vaccine myths for a university course that a friend teaches. Since this information is helpful for others, I thought I'd share it on my blog.

These are just a few of the myths that I've seen about vaccines and vaccination. I've chosen 10 myths that I commonly come across in sharing science on social media.

Vaccines are 100% safe and effective

People sometimes assume that vaccines are perfectly safe and perfectly effective. In reality, vaccines do carry some risks. Most are mild, with soreness at the injection site being the most common. The more serious risks, such as an allergic reaction to a vaccine ingredient, are rare, occurring roughly once in every million doses. Some people look at these risks and conclude that vaccines are worse than the diseases they protect against. However, the serious complications of these diseases are far more common (ranging from one in ten to one in one thousand people infected) and far more severe than even the worst complications from vaccines.

For more information on the risks of vaccines and vaccine preventable diseases, see this resource: https://www.cdc.gov/vaccines/pubs/pinkbook/index.html

I have a series of infographics on vaccine risks compared to the risks of vaccine-preventable diseases: https://themadvirologist.blogspot.com/2016/04/vaccine-infographics.html


Vaccines cause autism
This is a common misconception that was started by a doctor who has since been stripped of his medical credentials because of the fraud he committed in trying to make the link. Many studies have been conducted since then, covering millions of kids with and without autism, including siblings, and no link between vaccination and autism has been found. Some of this research was even funded by anti-vaccine groups, and still no link was seen. The reason so many people still believe this is timing: a child gets the MMR vaccine and a little while later is diagnosed with autism. That kind of temporal correlation makes a connection seem real, but on its own it isn't reliable evidence. The age at which kids are given the MMR vaccine is also the age at which the signs of autism first become noticeable, which is what leads to diagnosis, so the association is coincidental. What causes autism is still unknown, but right now the research points toward autism being a genetic condition. Repeating the myth that vaccines cause autism is also harmful to autistic people, as it implies it would be better to die of a disease, like the measles, than to be autistic.

For the findings of Andy Wakefield’s fraud, see this article in the BMJ: https://www.bmj.com/content/342/bmj.c5347

A study funded by an anti-vaccine group that found no link between the MMR vaccine and autism: https://www.pnas.org/content/114/16/4031
A good article looking at vaccination, autism and temporal associations: https://thelogicofscience.com/2016/06/28/why-are-there-so-many-reports-of-autism-following-vaccination-a-mathematical-assessment/
An article on genetics and autism: https://www.spectrumnews.org/news/autism-genetics-explained/

Vaccines are corporate cash cows
There is a common idea that vaccines are pushed in order to make money rather than to help the public. In reality, most healthcare providers spend more buying and administering vaccines than they earn giving them. Vaccines also make up only a small portion of drug company sales, and their profit margins are lower than those of other drugs. Vaccine-preventable diseases, by contrast, tend to be far more expensive than vaccines; hospitalization for measles, for example, can cost tens of thousands of dollars. If the profit motive really drove everything, drug manufacturers and healthcare professionals would oppose vaccines, since they would stand to make more from treating vaccine-preventable diseases than from preventing them. This myth simply isn't true.

The costs of vaccines for healthcare professionals: https://www.npr.org/sections/health-shots/2011/10/17/141429853/vaccinations-can-be-money-losers-for-doctors

The costs of manufacturing vaccines: https://www.skepticalraptor.com/skepticalraptorblog.php/the-myth-of-big-pharma-vaccine-profits-updated/
The cost of containing one case of measles (from 2004): https://pediatrics.aappublications.org/content/116/1/e1


Vaccine manufacturers can't be sued
Some people claim that vaccine manufacturers cannot be sued. However, lawsuits against vaccine manufacturers can and do happen. This myth just isn’t based in reality.

An article discussing a lawsuit against Merck over the shingles vaccine: https://www.skepticalraptor.com/skepticalraptorblog.php/merck-shingles-vaccine-lawsuit-facts/


The diseases vaccines target are mild childhood diseases
This claim is often used to downplay the importance of vaccines. Part of the problem is that vaccines have been so successful that people have forgotten how bad these diseases actually are. Measles kills about one out of every 1,000 people infected even when medical care is available; without medical care, the death rate is much higher, with up to one in ten dying from the virus. Beyond causing death, many of these diseases also cause long-term harm such as blindness, deafness, and paralysis. Some cause more severe symptoms, including an increased risk of death, when caught later in life. Still others cause cancers that the vaccine prevents.

The severity of vaccine preventable diseases in adults: https://www.cdc.gov/vaccines/adults/vpd.html

Some images of vaccine preventable diseases (warning some are graphic): http://www.immunize.org/catg.d/s8010.pdf
Some infographics I've made on vaccines and vaccine preventable diseases: https://themadvirologist.blogspot.com/2016/04/vaccine-infographics.html


Vaccines cause viruses and bacteria to mutate
This myth is a bit trickier to address because there is some truth to the idea, but it doesn't apply the way people think. Viruses, bacteria, and other microbes are constantly evolving and mutating in order to replicate successfully. In theory, vaccines could drive viruses to mutate, since microbes are constantly evolving to avoid the immune system, but in practice this is rarely seen with vaccines. Part of the reason has to do with what vaccines target. Vaccines often target an exposed portion of a viral or bacterial protein. For viruses, these exposed regions are frequently the ones required to interact with cells and gain entry. A mutation that changes the part enabling cell entry, which is also what the vaccine can target, could leave the virus unable to enter a cell at all, making it a dead end (the virus can't infect that host anymore). It wouldn't matter that the immune system could no longer recognize it, because the virus couldn't enter cells and cause disease in the first place. That kind of evolutionary pressure is very hard to overcome without changing how the virus interacts with its host. In addition, some bacterial vaccines target not the pathogen itself but a toxoid, an inactivated form of the toxin the bacterium makes (the tetanus vaccine is an example), so the vaccine puts no evolutionary pressure on the bacterium itself. Finally, any vaccine candidate that did drive dangerous mutation would not be released, because its risks would outweigh its benefits.

Why drugs drive resistance but vaccines don’t: https://www.pnas.org/content/115/51/12878

Viral evolution: https://www.historyofvaccines.org/index.php/content/articles/viruses-and-evolution
The Red Queen hypothesis: https://en.wikipedia.org/wiki/Red_Queen_hypothesis

Toxoid vaccines are a type of subunit vaccine:





The flu vaccine gives people the flu
This is a common misconception about the flu vaccine. The injected flu vaccine cannot cause the flu because the influenza viruses in it have been chemically killed and can no longer infect cells. The vaccine can cause mild flu-like symptoms, but that is just the immune system responding. There is a live flu vaccine, given as a nasal spray, but its strains have been weakened, or attenuated, to the point that they cannot cause disease while still triggering an immune response. Another reason people blame the flu vaccine for illness is that it takes about two weeks after vaccination for protection to develop; anyone who gets sick with the flu in that window was infected from another source, not the vaccine. People also mistake bad colds and other illnesses for the flu. Colds make people feel bad, but they can still function; the flu can result in hospitalization and is a much more serious illness.

Misconceptions about the flu and flu vaccines: https://www.cdc.gov/flu/about/qa/misconceptions.htm

Ten flu myths: https://www.health.harvard.edu/diseases-and-conditions/10-flu-myths
Why the flu vaccine can’t give you the flu: https://www.vaccinestoday.eu/stories/no-cannot-get-flu-vaccine-heres/
The differences between the flu and colds: https://www.cdc.gov/flu/about/qa/coldflu.htm


All vaccines shed and can get people sick
This is a very common misconception based on a misunderstanding of what shedding is. Simply put, shedding is when a pathogen leaves a host to infect another host; sneezing and coughing are examples of how pathogens are shed. With vaccines, only live vaccines have the potential to shed, and unless the vaccine strain can undergo a process known as reversion (the weakened strain mutating back to being fully infectious), what is shed is just the attenuated vaccine strain. Only a few vaccines have been documented to revert to a fully infectious pathogen; the oral polio vaccine and the smallpox vaccine are two well-documented examples. Researchers working on vaccines are very mindful of the danger of reversion and take steps to prevent it, and this has been one of the limiting factors in developing vaccines for a variety of diseases, such as the SARS coronavirus.

Viral shedding: https://en.wikipedia.org/wiki/Viral_shedding

Computer modeling to develop better live-attenuated influenza vaccines: https://www.nature.com/articles/nbt.1636
Live-attenuated vaccines and reversion: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3314307/
How SARS-CoV vaccine candidates undergo reversion: https://journals.plos.org/plospathogens/article?id=10.1371/journal.ppat.1005215





The HPV vaccine is killing healthy kids
This is a myth built on taking the personal tragedies of families and twisting them for gain. The HPV vaccine doesn't contain any live virus, just proteins from different HPV strains. The vaccine matters because these HPV strains cause a variety of cancers, and where the vaccine has been widely adopted it has already led to a reduction in cancer. People believe the HPV vaccine is killing kids because of a temporal association: kids get the vaccine before they are sexually active, and when something bad later happens, the vaccine gets blamed simply because it came first. People try to make sense of tragedies, and something usually gets singled out for blame. There are also unscrupulous people who prey on the grieving and use their stories to make money. I'm not going to link to those sites, but they often sell herbal supplements to "detox" from vaccines and/or replace vaccines.

HPV vaccine myths: https://shotofprevention.com/2013/08/20/why-some-parents-are-refusing-hpv-vaccine-for-their-children/

More information on the HPV vaccine: http://www.hpvvaccine.org.au/parents/myths-and-facts-about-hpv-and-the-vaccine.aspx
The HPV vaccine and cancer prevention: https://www.cancer.org/latest-news/hpv-vaccine-a-powerful-way-to-help-prevent-4-cancers-in-women.html
Australia is close to eliminating most HPV cancers due to the HPV vaccination program: https://www.thelancet.com/journals/lanpub/article/PIIS2468-2667(18)30183-X/fulltext


The flu vaccine isn’t necessary
This myth never made sense to me, but it is rooted in the gambler's fallacy: because something bad hasn't happened to them in years, some people conclude that the risk of it happening is low. The seasonal flu kills hundreds of thousands of people each year. Pandemic strains kill many more, and disproportionately kill healthy people, because those strains can trigger something called a cytokine storm, in which the immune system is overactivated to the point of death; this is why the 1918 flu was so deadly to young people and killed so many. Another reason some people don't think the vaccine is worth getting is reporting on vaccine effectiveness. Effectiveness measures how well a vaccine prevents initial infection, but that calculation doesn't capture the vaccine's other benefits, including reduced symptom severity and duration when it doesn't prevent infection outright. For someone in a high-risk group, getting the flu vaccine can be the difference between recovering at home and being hospitalized with a risk of death.

Gambler’s fallacy: https://en.wikipedia.org/wiki/Gambler%27s_fallacy

Influenza viruses causing cytokine storms: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4711683/
Why the 1918 influenza outbreak killed so many young people: https://www.smithsonianmag.com/history/why-did-1918-flu-kill-so-many-otherwise-healthy-young-adults-180967178/
Vaccine effectiveness: https://www.cdc.gov/flu/about/qa/vaccineeffect.htm
The flu vaccine reduces symptom severity: https://www.cdc.gov/flu/spotlights/2017-2018/vaccine-reduces-risk-severe-illness.htm
The flu vaccine can reduce symptom severity even against strains not in the vaccine: https://www.eurosurveillance.org/content/10.2807/1560-7917.ES.2018.23.43.1700732?rel=0
A recent estimation of the worldwide death rate due to influenza: https://www.cdc.gov/media/releases/2017/p1213-flu-death-estimate.html




Additional reading:

Simply put, vaccines save lives: https://www.pnas.org/content/114/16/4031
Vaccines as a proactive approach to emerging infectious diseases: https://www.pnas.org/content/114/16/4055
20 questions about vaccines: https://www.historyofvaccines.org/content/articles/top-20-questions-about-vaccination

Sunday, January 28, 2018

Stonyfield responds to consumers by calling them trolls and banning them

Recently, a dairy company that sells milk and yogurt posted a video in which they asked children to recite what "GMOs" are. Naturally, the kids repeated things they had likely heard from their parents without understanding what they were talking about (more on this in a second). The outrage from consumers was swift. People were not happy that a company was marketing their products by calling into question the safety and "wholesomeness" of their competition. Stonyfield responded to the criticism and complaints from their customers in one of the least professional ways possible: they called them trolls and started banning them from voicing their complaints directly to the company on social media. The Farmer's Daughter blogged about it here (the link also includes the video in question), but to make it easy, I'm attaching screenshots of what the company said about the consumers who were complaining.




I normally don't comment directly to companies on social media anymore as I've found that if the company is not interested in an honest discussion, they'll just scrub comments. But in this case, I made an exception because I had hoped that maybe they'd listen if enough people responded. This is what I wrote:


After I wrote that, I went to bed, and by the next day my comment and those of at least 30 other people who had commented during the night had been deleted. I was also banned from the page, so I suspect the others were too. So many people have been banned from the page within the last 48 hours that a group was created to document the number of real people Stonyfield has banned. Members of the group range from concerned parents to farmers to scientists like myself, and many people fall into several categories (I'm both a parent and a scientist). Although Stonyfield claimed they were interested in listening to the concerns of consumers, they have silenced anyone who disagreed with what they did. Since Stonyfield must have thought I was a paid troll, here is a photo of my kids (it's hard to get them to sit still long enough for a good picture) and a picture of me collecting samples from a wheat field. I'm a real person who is not only a parent, but an agricultural scientist.



I wasn't the only scientist who was banned and had comments deleted. It also happened to Mommy, PhD and Thoughtscapism, to name just two examples of many.

So why were people outraged enough to comment on a page that has been publicly calling consumers trolls and deleting even the most respectful of comments? It goes back to the video that they shared. To be up front, I have no problem with people buying organic produce. But I absolutely do not like the marketing tactics that the organic food industry uses. Organic marketing campaigns often target conventional farming practices and are built on scare tactics and misinformation. Long-time followers might have an idea of how I feel about misinformation, especially when it is used intentionally for personal gain; for people new to my page, I abhor it. If the only way you can sell a product is to make people think a competitor's product will kill them, then there is something wrong, not only with your product, but with you. This video tries to pit the views of children against scientific evidence. As parents, if we listened to children about everything, they'd be eating candy and ice cream for breakfast while watching cartoons all day. When it comes to science, it's better to trust scientists. Many scientific bodies have examined the evidence for and against GE crops and found the technology no more risky than other breeding methods. The US National Academy of Sciences even conducted an exhaustive review of every piece of evidence, for or against GE crops, and found the supposed risks to be based on poorly conducted science. They reached the same conclusion other scientific bodies have reached for the last 15 years: genetic engineering is no more risky than other breeding techniques. So using children to claim that foods made from GE crops are bad is not only misleading, it is incredibly offensive. That's why people pushed back.

But my offer to Stonyfield still stands. If you want to discuss the technology and why it's not something to be afraid of, I am still more than willing to do so. I'm a publicly funded scientist who believes strongly that one of my duties is to convey science to the public, especially since much of my education was paid for by the public through grants and student loans. This is one of the ways I see myself repaying the opportunities I've been given. So, Stonyfield, if you are interested, let me introduce you to the wonders of modern agricultural technology and how it could be used to improve organic production systems.



Monday, September 25, 2017

Does a recent paper by Shaw really show aluminum adjuvants cause autism?

Note: This is a special blog post coauthored by The Mad Virologist and The Blood-Brain Barrier Scientist (this article will be co-published on both our blogs). Another post has already been published on this paper, but we wanted to take a deeper look at everything that is wrong with this paper.


A recent paper by ophthalmologist Chris Shaw was published and immediately touted as proof positive that the aluminum adjuvants found in some vaccines are responsible for causing autism. Before we get into the paper, I have a few choice things to say about Chris Shaw. Despite not being an immunologist, Shaw has ventured into studying how vaccines and vaccine adjuvants cause neurological disorders such as autism. Shaw made headlines in 2016 when a paper he co-authored claiming to show a link between the HPV vaccine and neurological disorders was retracted after being accepted by the journal Vaccine. It turned out that the statistics used in the paper were completely inappropriate and that there were undisclosed conflicts of interest for some of the authors, including Shaw. These issues should have prevented the paper from being accepted in the first place, but mistakes do happen and science tends to be self-correcting. More surprising is that Shaw claimed he didn't know why the paper was retracted and that the science was of the highest quality. Shaw's previous work has also been described by the WHO as deeply flawed and was rejected by that body. We bring this up not to dismiss the paper out of hand, but to help illustrate why Shaw's work deserves additional scrutiny; hopefully by the end of this post, the reason for that scrutiny will be abundantly clear. We'll begin by examining the methods used by Shaw's research group and pointing out some of the issues.
Background for experimental design flaws: PK and species issues
One recurrent problem with Shaw's work is that his "vaccination schedule" treats rodents, such as mice and rats, as humans in miniature. It is wrong to assume that rodents and primates are alike; they are not, and there are notable physiological differences between rodents and non-rodents. For example, a couple of studies by Terasaki and colleagues have shown differences in the expression of solute carriers and drug transporters at the blood-brain barrier. We cannot exclude that such differences bias the outcomes observed in his studies, though this caveat applies to any in vivo study based on a rodent model.

There is also the issue of brain development and how the vaccination schedule maps onto brain maturation. In this study (as in the previous ones), Shaw and colleagues treat injections from post-natal day (PND) 3 to 12 as representative of a human infant vaccine schedule. The literature differs here: earlier work from Clancy and colleagues mapped PND12 to the seventh gestational month in humans, while more recent publications map PND21 to the sixth month after birth, which would put PND12 at roughly the third month of infancy following a full-term birth. Either way, you can easily appreciate that under Shaw's flawed experimental design the total amount of Al administered over a two-year human schedule was given within the equivalent of 90 days of birth, whereas the CDC vaccination schedule does not begin before the second month of infancy, apart from the two hepatitis B injections given at birth and after the first month.
Beyond the flaw in the experimental design, we cannot exclude differences in the pharmacokinetic profile of Al adjuvants between mice and humans. The available data are fairly limited, but a recent study from Kim and colleagues failed to show significant brain uptake of Al compared to controls following a single oral administration of different Al oxide nanoparticles at 10 mg/kg. Furthermore, Shaw's approximation of the total burden of Al from vaccines (550 µg/kg) is not an accurate metric, because absorption, distribution, and elimination are dynamic processes occurring simultaneously. The daily burden of Al from vaccines is a much more reliable parameter, and Yokel and McNamara estimated it at about 1.4-8 µg/day, based on 20 injections spanning a 6-year period in a 20 kg individual.

If we follow Shaw's calculation, the total burden at age 6 would be 1,650 µg/kg, or 33,000 µg for a 20 kg six-year-old. That works out to roughly 15 µg/day of Al from vaccines, a value 2- to 10-fold higher than the estimate for humans. This makes the comparison apples to oranges: Shaw's experimental paradigm is flawed and not representative of a clinical scenario.
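To make the arithmetic above easy to check, here is a minimal Python sketch of the back-of-the-envelope calculation; the input numbers are the ones quoted in this post and in Yokel and McNamara, and the variable names are ours.

# Back-of-the-envelope check of the daily Al burden comparison above.
total_burden_per_kg = 1650          # µg Al per kg body weight by age 6, following Shaw's figures
body_mass_kg = 20                   # the 20 kg six-year-old used by Yokel and McNamara

total_burden = total_burden_per_kg * body_mass_kg     # 33,000 µg
days = 6 * 365                                        # a 6-year window
daily_burden = total_burden / days                    # about 15 µg/day

yokel_low, yokel_high = 1.4, 8.0    # µg/day, Yokel and McNamara's estimate for vaccines

print(f"daily Al burden under Shaw's paradigm: {daily_burden:.1f} µg/day")
print(f"fold difference vs. the human estimate: "
      f"{daily_burden / yokel_high:.1f}x to {daily_burden / yokel_low:.1f}x")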


Selection of genes to measure:
Selecting which genes to measure is a crucial step in a study like this; if care is not taken to select the right genes, the study is wasted effort. Shaw states in the paper that the genes were selected because they were previously published. However, not all of the genes they measured came from that paper. Only 14 of the genes did (KLK1, NFKBIB, NFKBIE, SFTPB, C2, CCL2, CEBPB, IFNG, LTB, MMP9, TNFα, SELE, SERPINE1, and STAT4), leaving 17 measured genes that are not found in it. Two of these can be explained: one gene, ACHE, is described as having been selected because of other work, so it is sourced, and the second is the internal control gene beta-actin, a housekeeping gene commonly used as a baseline for relative expression. That still leaves 15 genes unaccounted for. We suspect these genes were selected because they are involved in the innate immune response, but no reason is stated in the paper.


The way these genes were selected is problematic. Because roughly half of the genes appear to have been chosen for uncited reasons, this study is what is known in science as a "fishing expedition." There is nothing inherently wrong with this type of research, and it can lead to new discoveries that expand our understanding of the natural world (this study that increased the number of sequenced viral genomes by nearly tenfold is a good example). But what a fishing expedition can show is limited: these studies can motivate follow-up work, but they do not demonstrate causality. Shaw is claiming causality from his fishing expedition here.


There is also the problem that they used older literature to select their gene targets when much more recent research exists. By happenstance, they did measure some of the same genes covered in that newer work, but their results do not match what has been measured in children diagnosed with autism. For example, RANTES was shown to be decreased in children with autism, yet in Shaw's work there was no statistical difference in RANTES expression between mice given the aluminum treatment and those receiving saline. Likewise, MIP-1alpha was shown to be decreased in developmentally delayed children but was increased in the aluminum-treated mice. The same applies to IL-1b, which was found to be elevated in children with moderate autism, yet showed no statistical difference between the aluminum-treated mice and the saline controls. In fact, IL-4 was the only gene to follow an expression pattern similar to what was found in children with severe autism (elevated in both cases). However, there is something odd with the gel in this case. This is the image for figure 4 that was included in the online version of the paper (we have not altered it in any way). Look closely at the top right panel at the IL-4 and IL-6 samples: the bands for the control and the aluminum-treated mice have different colored backgrounds (we enlarged the image to highlight this but did not adjust the contrast). If these bands came from the same gel, there would not be a color shift like this, where the treated bands are encircled by a lighter background. The only way this could happen is if the gel image was assembled in Photoshop. The underlying differences could be real; however, since the image was modified, we cannot know for sure, and this kind of image manipulation is scientific misconduct. Papers get retracted for this all the time, and people have lost their degrees for doing it in their dissertations. These gel results cannot be trusted, and the paper hinges on them. The Western blots, and the issues with them, are discussed below.


The unaltered figure 4.

A close up of the panel with the regions in question highlighted.

Semi-quantitative RT-PCR:
In order to quantify the gene expression levels of the genes that Shaw’s group selected, they used an older technique called semi-quantitative RT-PCR. This technique uses the exponential increase in PCR products in order to show differences between expression of a gene under different conditions. There’s nothing wrong with the technique provided one understands what the limitations are. Let’s say you have a large number of genes that you want to measure expression of, but you aren’t sure which genes are going to be responsive and you have limited funds. Semi-quantitative RT-PCR is a good method to screen for specific genes to be examined further by more precise techniques, such as Real-Time RT-PCR, but it’s not appropriate to use this technique and then make statements about precise quantification. Where semi-quantitative RT-PCR excels is with genes that are normally not expressed but can be expressed after some sort of stimulus, such as terpene biosynthesis genes that are induced by insect feeding.


To put it bluntly, semi-quantitative RT-PCR was not used properly in the paper by Shaw. The way it was used implied a level of quantification that the technique simply cannot deliver. Without verification by another method, ideally Real-Time PCR, which can determine the exact abundance of a given target, these results should be taken with a grain of salt. That would be true even if there were no irregularities in the gel images; with those irregularities, independent verification becomes absolutely essential, and its absence should have prevented this paper from being accepted.


Western-blots and data manipulation: the owl is not what it seems

For Western blots, a semi-quantitative approach is more accepted, but it is important that the data you show (qualitative) are consistent with the data you count (quantitative). In Western blot analysis, we measure the relative darkness of a protein band (the black lines you see in papers) between treatments and controls. Because you cannot exclude errors due to the amount of protein loaded, we also measure the band intensity of proteins that are very abundant, usually referred to as housekeeping proteins (because they play essential functions in cells). In this case, beta-actin (labeled ACT in the paper) was used.

Once you normalize to beta-actin, you can compare the effect of a treatment by comparing the relative band intensity ratios. In both cases (semi-quantitative PCR and Western blots), "what you see is what you measure": you have to show a representative blot alongside the quantitative data to demonstrate that your quantification matches the band densities, and the common practice is to determine band density with image analysis software such as ImageJ. Showing a Western blot is nice, but not foolproof. Indeed, Western blot data (along with fluorescence images) are among the most commonly manipulated or even falsified types of data, and among the most common triggers of paper retractions. Someone notices something fuzzy in a Western blot, questions reach the editors, and the editors ask for access to the full dataset (usually the X-ray film or the original full scan of the blot). Often, authors who cannot provide such data fall back on excuses like "the dog ate the flash drive" or "the hard drive containing the data crashed."
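To make the normalization step concrete, here is a minimal Python sketch with made-up band densities (not values from the paper) showing how each target band is normalized to the beta-actin band from the same lane before any comparison is made:

import numpy as np

# Hypothetical ImageJ band densities (arbitrary units) for one target protein.
target_control = np.array([1200., 1100., 1300., 1250., 1150.])
target_treated = np.array([1500., 1400., 1600., 1450., 1550.])

# Beta-actin (loading control) densities from the same lanes.
actin_control = np.array([2000., 1900., 2100., 2050., 1950.])
actin_treated = np.array([2100., 2000., 2200., 2050., 2150.])

# Normalize each lane to its own beta-actin band to correct for protein loading.
norm_control = target_control / actin_control
norm_treated = target_treated / actin_treated

# Comparisons should be made on these ratios, not on raw band darkness.
print("control:", norm_control.round(3))
print("treated:", norm_treated.round(3))
print("fold change (treated/control means):",
      round(norm_treated.mean() / norm_control.mean(), 2))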

There are methods for spotting image manipulation in Western blots: playing with the brightness and contrast, requiring quantitative data in addition to a representative blot, and checking that all samples come from the same gel (you cannot use a cookie cutter and build your own perfect gel). There is an excellent article that describes the pitfalls, and outright cases, of bad Western blot data representation and image manipulation.
At this time, there are several issues in both the Western blot pictures and their subsequent analysis that call the reliability of the data presented in this study into question. For this post, we used the full-resolution pictures provided on the journal website, opened them in ImageJ to convert them to 8-bit format, inverted the lookup tables (LUT), and adjusted the brightness and contrast. We then exported the pictures to PowerPoint to ease annotation and commenting. We encourage readers to download the full-resolution images and judge for themselves.
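For readers without ImageJ, the same preprocessing can be approximated with a short Python script. The sketch below (using Pillow and numpy, with a placeholder filename) illustrates the convert-to-8-bit, invert-LUT, adjust-contrast workflow described above; it is not the exact steps we ran.

from PIL import Image
import numpy as np

# Placeholder filename; substitute the full-resolution figure downloaded from the journal.
img = Image.open("figure1_full_resolution.jpg").convert("L")   # 8-bit grayscale

arr = np.asarray(img, dtype=np.uint8)

# Invert the lookup table (dark bands become bright, like ImageJ's Invert LUT).
inverted = 255 - arr

# Simple linear contrast stretch (roughly what Brightness/Contrast > Auto does).
lo, hi = inverted.min(), inverted.max()
stretched = ((inverted - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

Image.fromarray(stretched).save("figure1_inverted_contrast.png")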


The first concern comes from looking at Figure 1C. First, here is the original Fig. 1.

The original Figure 1.


Then, here is the close-up analysis of Fig. 1C.
An examination of the irregularities in Figure 1C.

There are several issues. First, some bands appear to be spliced, meaning the authors created a custom blot by assembling bands from different gels. This is a no-no in Western blotting: all bands shown in a blot should come from the same gel. This is why Western blots are torture for graduate students and postdocs; you need to show your best blot, with all bands behaving consistently with your quantitative analysis.

Second, a rectangular grey patch appears to have been added on top of the control 3 TNF band. This is possible data manipulation and fraud, as it deliberately masks and hides a band. That is a big red flag for the paper. The third issue with Fig. 1C is the recurring impression that bands have either been cropped onto a grey rectangle or subjected to what I call "Photoshop brushing," in which the brush tool is used to erase areas of the gel that don't look good enough. You can clearly see it with actin: there is a sharp line between the blurred blot and a uniform grey across the bottom half, compared to the wavy top of the blot. Whether this is tolerated for Western blots is a grey area I am less familiar with, but it is a clear no-no for any immunofluorescence picture; any image manipulation that goes beyond brightness/contrast adjustment and alters the acquired picture is considered data manipulation. If you reanalyze the data after correcting for the inconsistencies in Figure 1C, the graph looks quite different and fails to show any difference between Al-treated and control animals, provided you avoid over-normalizing and simply plot the protein/actin band density ratios.

What is also concerning and surprising is the authors' conclusion that males, but not females, showed an inflammatory response. Of course, the authors do not show the corresponding outcomes from the female animals and expect us to take them at their word. The problem is that this conclusion directly contradicts the literature: there is solid evidence for sexual dimorphism in inflammatory responses, particularly in neuroinflammation and autoimmune disorders such as multiple sclerosis, and there is a growing call for the scientific community to report results for both sexes (males and females alike). Although Shaw reports that the study was performed in both males and females, he gives us only this explanation at the end of section 3.1:

Taken together, a number of changes indicative of the activation of the immune-mediated NF-κB pathway were observed in both male and female mice brains as a result of Al-injection, although females seemed to be less susceptible than males as fewer genes were found altered in female brains.

Yet the interesting part comes when Shaw tries to compare IκB phosphorylation between males and females following Al injection (Fig. 3C). When you analyze the data, concerns arise very quickly. First, there is a possible cookie-cutter band, in which a band that looks nice enough is simply pasted into a blank space. This is very suspicious, because it makes fabricating data trivially easy. Second, the same "Photoshop brushing/erasing" appears in this figure, where I suspect fraudulent activity: in the female panel, it looks as if someone tried to mask bands that should not have been there. Remember the claim that males, but not females, showed an inflammatory response? Is this an attempt to conceal data that contradict that claim?

The original Figure 3.

Again, let's bring up Figure 3 at its full resolution.
A close-up of some of the irregularities in Figure 3.

Finally, the same issues persist and are even more obvious in Fig. 5A. Again, we see a mixture of Western blot image manipulations, including band splicing, Photoshop brushing, and cookie-cutter bands.
First, the unedited picture:
The original Figure 5.


And below the close up of Fig.5A
An examination of some of the irregularities in Figure 5A.


These are serious concerns that call the credibility of this study into question, and they can only be addressed by providing full-resolution (300 dpi) versions of the original blots (the X-ray films or the original image files generated by the gel documentation camera).


There has been a lot of chatter about this paper on PubPeer, where users have identified many duplicated bands and other irregularities. If anyone is unsure how accurate the results are, we strongly suggest looking at what has been flagged on PubPeer; it suggests that the results are not entirely accurate, and until the original gels and Western blots are provided, it looks as though the results were manufactured in Photoshop.


Statistics:
Long-time followers know that I tend to go straight to the statistics used in a paper to see whether the claims are reasonable. Poor use of statistics has been the downfall of many scientists, even ones making honest mistakes; it's a common pitfall that scientists have to be wary of. One easy solution is to consult a statistician before submitting a paper for publication, since these experts can point out whether the statistical tests that were run are correct. The Shaw paper could have benefited from this expertise. The authors used a Student's t-test for every comparison between the control and the aluminum-treated group. This is problematic for a couple of reasons: the comparisons are not independent tests, and the data likely are not normally distributed, so a t-test isn't appropriate. Better choices would have been Hotelling's T-squared test or Tukey's HSD.
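As a purely illustrative aside (simulated data, and a simple Bonferroni adjustment rather than the multivariate tests mentioned above), the sketch below shows why running an uncorrected t-test for every gene tends to produce "significant" hits even when there is no real effect:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_mice = 31, 5          # roughly the scale of the Shaw study

# Simulated expression values with NO true difference between groups.
control = rng.normal(loc=1.0, scale=0.2, size=(n_genes, n_mice))
treated = rng.normal(loc=1.0, scale=0.2, size=(n_genes, n_mice))

pvals = np.array([stats.ttest_ind(control[g], treated[g]).pvalue
                  for g in range(n_genes)])

# With 31 uncorrected tests at p < 0.05, one or two false positives are expected by chance.
print("genes 'significant' at p < 0.05 with no correction:", (pvals < 0.05).sum())

# A Bonferroni-style correction keeps the family-wise error rate in check.
print("genes significant after Bonferroni correction:",
      (pvals < 0.05 / n_genes).sum())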

Another issue is that the authors reported standard error (SE) instead of standard deviation (SD). To understand why this matters, it helps to understand what each statistic measures. The SD measures the variation among the samples, that is, how far the individual measurements fall from their mean; a smaller SD means the measurements vary little. The SE, by contrast, measures how precisely the sample mean estimates the population mean. Both the SE and the SD have their uses, but reporting the SE is not always appropriate, especially as a descriptive statistic (in other words, when you are trying to summarize data). Simply put, the SE is an estimate of the uncertainty in the sample mean, not a description of the spread of the data; if you are presenting descriptive statistics, you need to use the SD. Misusing the SE where the SD should be shown is a common mistake in research publications. In fact, this is what the GraphPad manual has to say about when to use the SD and when to use the SE:

“If you want to create persuasive propaganda:
If your goal is to emphasize small and unimportant differences in your data, show your error bars as SEM, and hope that your readers think they are SD.
If your goal is to cover-up large differences, show the error bars as the standard deviations for the groups, and hope that your readers think they are standard errors.
This approach was advocated by Steve Simon in his excellent weblog. Of course he meant it as a joke. If you don't understand the joke, review the differences between SD and SEM.”
The bottom line is that there is an appropriate time to use the SE but not when you are trying to summarize data.
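A quick numerical illustration with made-up measurements: the SD describes the spread of the data themselves, while the SEM (the SD divided by the square root of n) describes the precision of the mean, which is why SEM error bars look smaller and shrink further as animals are added.

import numpy as np

# Hypothetical band-density measurements from five animals (arbitrary units).
values = np.array([0.82, 1.10, 0.95, 1.25, 0.88])

sd = values.std(ddof=1)             # sample standard deviation: spread of the data
sem = sd / np.sqrt(values.size)     # standard error of the mean: precision of the mean

print(f"mean = {values.mean():.3f}, SD = {sd:.3f}, SEM = {sem:.3f}")

# With four times as many animals showing the same spread, the SD stays roughly
# the same but the SEM is cut in half: smaller error bars without any reduction
# in the underlying biological variability.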


Another issue is the number of animals used in the study. The consensus in published studies is to use the minimum number of animals needed to achieve statistical power (usually n = 8), keeping numbers low out of welfare and humane considerations for lab animals. In this study, that number is nearly halved (n = 5). The authors also add confusion by blurring the line between biological replicates (n = 5) and technical replicates (n = 3). By definition, biological replicates are different organisms being measured; they are independent of one another and are what matters for statistical analysis. Technical replicates are repeated measurements of the same biological sample and are not independent. Treating technical replicates as statistically meaningful biases you toward mistaking a fluke for a biological phenomenon.
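Here is a minimal sketch (hypothetical numbers, our own variable names) of how the replicates should be handled: average the three technical replicates for each animal first, so the statistical test only ever sees five independent values per group rather than fifteen pseudo-replicates.

import numpy as np
from scipy import stats

# Hypothetical data: 5 animals per group, 3 technical replicates per animal.
control = np.array([[1.02, 0.98, 1.01],
                    [0.95, 0.97, 0.96],
                    [1.10, 1.08, 1.12],
                    [0.99, 1.01, 1.00],
                    [1.05, 1.03, 1.04]])
treated = np.array([[1.20, 1.18, 1.22],
                    [1.02, 1.05, 1.03],
                    [1.15, 1.12, 1.14],
                    [0.98, 1.00, 0.99],
                    [1.25, 1.22, 1.27]])

# Average the technical replicates so each animal contributes one value.
control_means = control.mean(axis=1)   # n = 5 biological replicates
treated_means = treated.mean(axis=1)   # n = 5 biological replicates

# Whichever test is ultimately appropriate, it should only see these per-animal means.
print(stats.ttest_ind(control_means, treated_means))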


Conclusions:
Based on the methods used in this paper, Shaw et al. went too far in declaring that aluminum adjuvants cause autism. Beyond that, there are six key points that limit what conclusions can be drawn from this paper:
1) They selected genes based on old literature and ignored newer publications.
2) The PCR quantification method is imprecise and cannot be used as an absolute measure of expression of the selected genes.
3) They used inappropriate statistical tests that are more prone to giving significant results, which is possibly why they were selected.
4) Their dosing regimen for the mice makes incorrect assumptions about mouse development.
5) They gave the mice far more aluminum, far sooner, than the vaccine schedule exposes children to.
6) There are irregularities in both the semi-quantitative RT-PCR and Western blot data that strongly suggest these images were fabricated. This is probably the most damning issue with the paper. If the data were manipulated and images fabricated, then the paper needs to be retracted and UBC needs to investigate research misconduct by the Shaw lab.

Taken together, we cannot trust Shaw's work here, and if we were the ones funding this work, we would be incredibly ticked off, because money that could have done some good was instead wasted. Maybe there is a benign explanation for the irregularities we've observed, but until these concerns are addressed, this paper cannot be trusted.
Here is a dropbox link for the raw figures shown in this post as well as our analyses of these figures.