Q&A – The Lowdown on GMOs With A Biotech Firm

Arctic Apples

Greetings and salutations, my fellow readers. It’s been a bit of a roller coaster ride publishing the last two posts on GMOs, so I thought to myself, where should I go next? Dive further into the rabbit hole (making myself ever more unpopular), or switch topics? I have an interview with a scientist, check! With a farmer, check! Biotech firm? Bingo! An opportunity thus presented itself, and so further down the rabbit hole I went.

So, to round out—and conclude—my trifecta (or triumvirate—a much cooler word that makes me sound smarter than I am) of posts about GMOs, I have just finished an email Q&A with the CEO and founder of Okanagan Specialty Fruits (OSF), Neal Carter, whose company makes Arctic Apples (apples that don’t brown). In my two previous Q&As—with a scientist here and with a family farmer here—I offered commentary and concluding thoughts; this time, I prefer to let his positions stand on their own two feet, as they are more than capable of doing.

Do note, however: I am not trying to convince anyone not to eat organic food, or to eat GMO food, so don’t get your knickers in a twist.


1) What prompted your company to create a GM nonbrowning apple? Why not, for example, try to do the same with hybridization?

Our motivation for developing biotech apples, and all our other projects under development, is to introduce value-added traits that will benefit the tree-fruit industry. We have chosen to focus specifically on nonbrowning Arctic® apples as our flagship project for a number of reasons. One of the chief ones is that apple consumption has been flat-to-declining for the past two decades, and we are confident the nonbrowning apple trait can create a consumption trigger while also reducing food waste throughout the supply chain.

Neal Carter

Another key motivation is the ever-increasing demand for convenience. Arctic apples are ideally suited for the fresh-cut market, which is expensive to enter because of the browning issue. We often refer to the consumption trigger that convenient “baby” carrots created – they now make up two-thirds of all U.S. carrot sales!

As for why we use biotechnology to achieve this, it’s because we knew we could make a comparatively minor change safely, relatively quickly, and precisely. We silence only four genes: specifically, the ones that produce polyphenol oxidase, the enzyme that drives the browning process. We do so primarily through the use of other apple genes, and no new proteins are created. If we were to attempt to breed this trait conventionally, we could easily spend decades trying with no guarantee of success.

2) What benefits will the Arctic apple bring to the food market? Are there quantitative studies that can predict how effective it could be?

In addition to addressing stagnant apple consumption and tapping into the underutilized fresh-cut and foodservice markets, Arctic apples offer plenty of other benefits throughout the supply chain.

For growers and packers, nonbrowning apples can help significantly reduce the huge number of apples that never make it to market because of minor superficial marks such as finger bruising and bin rubs. So much of the food produced today is wasted purely for cosmetic reasons. This extends to retail, where the nonbrowning trait can have a big impact on shrinkage, make displays more attractive, and open the door to exciting new value-added apple products.

Consumers will also benefit from throwing away far less fruit at home – how many apples get bruised up on the way back from the grocery store or in kids’ lunchboxes? Our goal is helping consumers, especially kids, eat healthier and waste less food. Last year, one grade 2 teacher wrote about how excited she is for nonbrowning apples, explaining she sees countless perfectly good apples and apple slices thrown out by her students due to minor browning and bruising. Consumers will also enjoy other tangible benefits like new opportunities for cut apples in many cooking applications.

As for quantifiable evidence showing the value of these benefits, food waste has been a major issue over the past year with recent estimates from the UN’s Food and Agriculture Organization suggesting around one-third of food produced is wasted. The numbers are even worse for fruit, where around half of what’s produced never ends up getting eaten.

As for the potential to create a consumption trigger, the produce industry is full of examples of how making fruit more convenient, especially for the foodservice industry, results in huge consumption boosts. We mentioned how baby carrots now make up two-thirds of carrot sales, and reports tracking major fruit and vegetable consumption trends frequently emphasize convenience. One example explains that one of the most prominent, ongoing trends “is a consumer demand for foods of high and predictable quality that offer convenience and variety.” Arctic apples satisfy all these requirements.

For apples specifically, there’s lots of attention given to how various chemical treatments can slow browning, and plenty of attempts to conventionally breed low-browning varieties (though this is quite different from being truly nonbrowning). For instance, a notable 2009 publication from the Journal of Food Engineering discusses how “the market for fresh-cut apples is projected to continue to grow as consumers demand fresh, convenient and nutritious snacks”. Yet it also explains that the “industry is still hampered by product quality deterioration” because when “the cut surface turns brown, it reduces not only the visual quality but also results in undesirable changes in flavour and loss of nutrients, due to enzymatic browning.” Again, Arctic apples address these issues.

Finally, some of the most convincing evidence that the nonbrowning trait will provide substantial value: both apple producers and consumers have told us so! In 2006/07 we surveyed a number of apple industry executives, 76% of whom told us they were interested in Arctic apples. In focus groups, we have found that over 80% of participants are positively interested in Arctic apples and 100% wanted to try them. Even more encouraging, when we surveyed 1,000 self-identified apple eaters in 2011, we found that their likelihood to buy Arctic apples continued to increase the more they learned about the science behind them!

3) How many, and how intensive, were the studies performed to show Arctic apples are as safe as other apples? Were the studies peer-reviewed? If so, by whom? (You may wish to discuss what was and/or wasn’t changed.)

Before getting into the specifics, it’s important to put things in perspective to show how rigorous the review truly is, and how particularly arduous it is for a small, resource-tight company like ours (see timeline):

Arctic Apple Timeline

So Arctic apples, our very first project, still haven’t been commercialized 17 years after we were founded and over a decade after we proved the technology and planted them! That means we now have over ten years of real-world evidence that Arctic trees grow, respond to pest and disease pressure, flower, and fruit just as conventional trees do.

Over this time, our apples have likely become one of the most tested fruits in existence. This makes detailing all of the specific tests impossible here, but we encourage anyone interested to view our extensive, 163-page petition on the USDA’s website, which provides full details.

Quickly highlighting some of the key ones:

These tests were performed by a variety of reputable groups and individuals, some third-party, some in-house. Our field trials were monitored and data was collected by independent horticultural consultants and an Integrated Pest Management specialist.

Of particular importance is the fact that there are no proteins in Arctic fruit that aren’t in all apples. This shows there’s nothing “new” in our apples that will affect consumers. This is expected, as we silence the genes that cause browning rather than introduce new attributes. To give an idea of how sophisticated the tests used to prove this are, they would be able to detect the equivalent of a single penny amongst 100-250 rail cars full of coal! We are confident Arctic apples are safe, and soon, we anticipate the FDA’s confirmation of this.

So what has all of this extensive testing taught us? Exactly what we thought it would – Arctic trees and fruits are just the same as their conventional counterparts until you bite, slice or bruise the fruit!

4) Can you name a few of the misconceptions — if any — that people associate your company with, or accuse your company of, when they find out you’re a biotech company? If there are misconceptions, why are they wrong or miss the big picture?

Absolutely – just as there are countless misconceptions about biotech foods in general, there are also plenty of myths about our company and Arctic apples. In fact, one of our most popular blog posts ever is titled “Addressing common misconceptions of Arctic orchards and fruit”.

We invite readers to visit that post and explore our site in general for more details, but the two most common misconceptions about Arctic apples are:

  1. Arctic apples will cross-pollinate with other orchards, causing organic orchards to lose organic certification: No organic crop has ever been decertified from inadvertent pollen gene flow. Even if pollen from an Arctic flower did pollinate an organic or conventional fruit, the resulting fruit is the same as that of the mother flower, not that of the pollen donor. Additionally, we are implementing numerous stewardship standards to ensure cross-pollination won’t occur, including buffer rows, bee-hive placement, and restrictions on distance from other orchards.
  2. Because Arctic apples don’t brown, they will disguise old/damaged fruit: The opposite is true! Arctic apples won’t experience enzymatic browning (which occurs when even slightly damaged cells are exposed to air), but the decomposition that comes from fungi, bacteria and/or rotting will be just the same as in conventional apples. This means that you will not see superficial damage, but you will see a change in appearance when the true quality is impacted.

Other accusations we hear somewhat frequently from a vocal minority who oppose all biotech foods are that “we don’t know what the effects will be down the road” or that we’re “messing with God/Mother Nature”. Regarding the first claim, the scientific tools we now have are truly amazing, and we have an unprecedented level of precision, control and analysis when developing biotech crops. They must be meticulously reviewed before approval, and around three trillion meals with biotech ingredients have now been consumed without incident. As to the messing-with-God/nature charges, biotech-enhanced crops are really just one more advancement in a long history of human-driven food improvements – and even the Amish and the Vatican support these advances!

5) As an insider, you are privy to the goings-on and workings of the biotech industry, what do you envision the future of biotech to be? What new seeds are coming down the line and what potential advantages or disadvantages might they bring?

We foresee biotech continuing to be the most rapidly adopted crop technology ever, as it has been for the past 17 years. We also anticipate that the already realized benefits of biotech crops will continue, such as those highlighted by a fifteen-year study: increased net earnings of $78.4 billion for farmers (mostly from developing nations), a reduction of 438 million kg in pesticide spraying, and a reduction in greenhouse gas emissions equivalent to removing 8.6 million cars from the road for a year. Two major categories in particular where we’ll see further advancements are environmental sustainability (reduced pesticide use, carbon emissions, and food waste) and higher crop yields under adverse conditions (from pest resistance, drought tolerance, etc.).

Another major trend you’ll see is the increased presence of biotech foods with direct consumer benefits, particularly nutrition. We will see many new projects following in the footsteps of crops like Golden Rice, which is fortified with beta-carotene, a precursor to Vitamin A. The World Health Organization has identified that around 250 million children under the age of 5 are affected by Vitamin A deficiency, which can cause blindness and death. Biotech crops like Golden Rice can potentially save millions of lives by helping address this, and efforts are already underway to produce other Vitamin A-enhanced crops, including bananas and cassava.

This is just the tip of the iceberg, though, as there are many other exciting developments on the way, including other nutrient enhancements for cassava, numerous drought-tolerant crops, blight-resistant potatoes and many more. I actually highlighted some of these crops in a TEDx talk I gave in October 2012 on the value of agricultural biotechnology, which is available to watch online.

6) As a biotech company, do you bear the brunt of the anti-GMO backlash nominally directed at Monsanto and DuPont? If so, how has this affected you? Please be specific.

All companies who develop biotech crops have to deal with a certain level of backlash from the vocal, emotional minority who oppose biotechnology.

We are quite unique in that when consumers discuss biotech companies, names like Monsanto and DuPont, as you mention, are the first ones that come to mind, rarely small companies like ours. Using Monsanto as an example, they have approximately 22,000 employees – we have 7. Because most organizations in this industry are pretty massive, they do get the lion’s share of attention. That being said, if we were to create a ratio of media attention to company size, ours would be through the roof!

One key reason we likely get more than our fair share of attention is that we’re dealing with apples. When we’re talking about something as popular and iconic as the apple (e.g., “an apple a day”, “American as apple pie”), it’s going to get people emotionally charged. Genetically, our enhancement is relatively minor compared to the majority of crops out there; yet even so, when our petition was available for public comment along with 9 other biotech crops in the U.S., we received around three times as many comments as all 9 of the other petitions combined!

In terms of how all this attention affects us, we can control that to some extent. On one hand, we could simply choose to ignore it. The review process is evidence-based (and rightfully so!), meaning we could keep our heads down, let the science speak for itself, and not worry about what people are saying. That’s not how we operate, however, as we believe in the benefits and safety far too much to keep quiet. We want to do our best to make sure accurate, evidence-based information is out there to counterbalance all the myths and misinformation. This may mean that we spend more time and resources on education than others might, but it’s too important an issue not to.

We’ve made a concerted effort to make transparency the core of our identity. We know we have a safe, beneficial product, and we’re happy to explain the truth behind the previously mentioned misconceptions. We make it a priority, no matter how busy things get, to keep active on Twitter, Facebook and LinkedIn, maintain a weekly blog, make timely site updates, respond to every single sincere email we get, and invest in delivering presentations such as last year’s TEDx talk. (Embedded below.)

We believe everyone in the science and agricultural industries has a responsibility to help educate the public on the facts of biotechnology. Sometimes that results in more backlash, but it’s worth it.

7) Some scientists state that the anti-GMO backlash has cemented Monsanto’s grip upon the market because only they can afford the regulatory burden. Do you find this to be true in your experience? And how does this affect the greater biotechnology field?

Well, we’ve touched on how rigorous the review process is and how much smaller we are than the big industry players, so yes, it is tough for smaller companies to bring a biotech crop to market. It’s challenging to raise funds, produce the needed data, and spend the resources on education, and it’s just a much bigger overall risk.

While the regulatory burden is heavier for small biotech companies, I think we’re an example that it’s still possible for the little guys to make it through, but it’s not easy. Not only do you have to successfully develop a fantastic product, but you must be focused, persistent and very patient. There is no rushing the review process, but here we are a decade after first planting Arctic trees and we expect to achieve deregulation in the U.S. later this year.

Even though we’re helping demonstrate it’s possible for small companies to commercialize a biotech crop, the high regulatory burden certainly does affect the industry as a whole. With such an intimidating outlook in terms of high investment, both in time and resources, there will obviously be far fewer small, entrepreneurial companies than would be ideal. In a field in which innovation should be embraced as much as possible, we are missing out on many potentially innovative companies and value-added products because the barriers are so high.

Really, what it comes down to is that the regulatory process is (and should be) extremely rigorous, but it is indeed possible for companies that aren’t multinationals to accomplish commercialization. Ideally, as biotech crops add further to their exemplary track record of safety and benefits and the scientific tools continue to improve, these barriers will gradually be lessened.

8) Lastly, what is your relationship to the government and governmental agencies? It has been alleged that agencies like the FDA are in the pocket of big biotech organizations and are willing to look the other way. Do you find any truth in those statements? If not, why not?

If we had to select one word to describe the multiple regulatory bodies we’ve dealt with over the past few years (USDA, APHIS, FDA, CFIA) it would be “thorough”. There’s certainly no looking the other way and nothing casual about the review process. If these government agencies were in the pocket of biotech companies, we wouldn’t still be awaiting deregulation more than ten years after we first developed Arctic apples!

Some people will see that some of the agencies have former members of biotech companies and immediately distrust the whole system; this misses the point. Of course they will have some former industry employees. These companies have thousands and thousands of employees, and plenty of them are well-credentialed with first-hand experience in multiple facets of agriculture. In most fields, movement between private and public spheres is common, and most working-age citizens will have at least 10 different jobs before they turn 50. Some overlap is inevitable.

The truth is, you will hear a very wide range of arguments from those who don’t like biotech crops and this is just another one on that list. Luckily, there is more than enough evidence to show that biotech crops are indeed safe and beneficial, including over 600 peer-reviewed studies, around one-third of which are independently funded. The best advice we can give to consumers is to do their own research, but always with a close eye on the credentials and reputability of the sources!

For more information on OSF or Arctic apples, please visit www.arcticapples.com


Neal Carter is the CEO and founder of OSF. Thank you for your time, Neal. I am, well, me; a curious fellow trying to make sense of the world (and I just released the 2nd edition of Random Rationality: A Rational Guide to an Irrational World for Kindle). It’s working out so far, and it’s quite fun too.

So, would you eat an Arctic Apple?

Q&A – The Lowdown on GMOs With A Family Farmer

thefarmerslife.com
In reading about GMOs over the last several years, I also read lots of reports about how farmers are disadvantaged, slaves to Monsanto, and for the most part, I blindly accepted them. But I had never heard from a farmer before. It was time to change that. It occurred to me recently that we live in a (mostly) free market. The Big Ag biotech companies can’t force people to buy their products; they have to convince them with results, with cost savings, or with whatever else a farmer needs that I know nothing about. The 95% of GM acreage in America isn’t a Monsanto empire: the farms bought into it not because they were forced to, but because they saw a benefit in it, and they keep buying the seeds not because they are obligated to, but because they still see benefits. On my last post, in which I interviewed a molecular biologist, Brian Scott (his photo is the featured image), a fourth-generation family farmer, was kind enough to let me ask questions about how he farms, why he uses biotech seeds, and what specifically his relationship is to Monsanto, from whom he buys some of his seed types. I wanted to know what really happens between a farmer and the evil company everybody talks about, and not hear about it from activists who’ve probably never set foot on a farm. While this is only one story from one farmer, it is enlightening. Also, do check out his blog, The Farmers Life, where he writes about running his farm.

Fourat (Me) – Why do you use GMOs?

Brian – I like to call GMO a tool in my toolbox. Biotech isn’t a silver bullet for every problem, but it’s still a powerful tool. We use traits like Bt and Roundup Ready (RR) on many of our acres, but not all of them. All our soybeans are generally RR, while only some of our corn carries that trait. Popcorn and wheat, our other crops, are not available in GMO varieties. Some of our corn acres are dedicated to waxy corn production, and we generally don’t buy those as RR. Built-in insect resistance in Bt corn, along with seed treatments, means it’s a very rare event that we have to treat a crop in season for pests. That means we prevent soil compaction by keeping another piece of equipment out of the field. It also means a sprayer doesn’t need to be filled with water, fuel, and pesticide, which is good for the earth and the wallet.

Me – What incentives are there for using GMOs?

Brian – There can be incentives such as buying traited crops and certain chemistry (herbicide, etc.) as a bundle to receive price discounts. Some crop insurance plans also offer a biotechnology discount. I think that says a lot about the effectiveness of GMOs. If an insurance company is willing to give you a discount, they must believe those crops lead to fewer crop insurance claims.

Me – As many activists allege, are you a slave to Monsanto once you sign their contract?

Brian – I’m certainly not beholden to any seed company. I can plant what I want and manage it how I see fit. Do I sign an agreement that stipulates certain things when I buy patented seeds? Yes. Do patents only apply to biotechnology? No. These agreements are not nearly as binding as people would lead you to believe. The most viewed post I’ve put online is an outline of my 2011 Monsanto Technology Use Agreement. In the post I break down the line items in my own words, but I also provide the reader with a scanned copy of the agreement pulled straight from my filing cabinet. This allows anyone to read the agreement for themselves. In short, if I buy seed from Monsanto, Pioneer, etc., nothing binds me into buying seed from them the following season. Nothing says I have to use their brand of herbicides or insecticides. Believe what you will about farmers being slaves to seed companies, but you’ve got to talk to a farmer before your mind is set in stone. My post can be found here. (Fourat: Definitely a worthwhile read.)

Me – Do you think you should be able to reuse the seeds you purchase from Monsanto? If not, why not?

Brian – That’s a tough question. For my purposes, if I wanted to save seed it would be soybean seed. All of our corn is hybrid corn. Hybrids don’t necessarily produce seed identical to the parent plant; therefore, planting that seed the next season would give you an unknown result. Soybeans self-pollinate, so they remain true to themselves genetically. If I saved seed, I would need to take a little extra care and expense to clean it and possibly apply seed treatments to protect young seedlings. Right now my view is that of a division of labor. Farmers are great at producing high-quality crops in high quantities. The seed companies have the know-how and resources to breed great plants. I think that’s a great combination for success. I’m not saying farmers couldn’t develop their own seed. Successful farmers are some of the smartest people I know, and they can do anything if they choose to. [Fourat: I’d never thought about it this way. Farmers can save time and money by not having to clean and protect the next crop’s seeds. Funny how simple things evade the minds of those of us not actually involved in the industry.]

I also believe that, since it takes several years and millions if not billions of dollars to bring an innovative new variety to market, any breeder, large or small, should be entitled to benefit financially from said variety for some period of time via a patent system.

Me – What is the most glaring factual error, if any, made by activists when discussing GMO seeds?

Brian – I often ask people what they think about crops that produce their own chemical defenses naturally, and I find a good number of people aren’t aware that some crops do this. For example, cereal rye has an ability to suppress weeds. This quality is called allelopathy. Many plants are naturally resistant to herbicides. Think about your lawn. Spraying 2,4-D on your grass to kill dandelions and other weeds won’t harm your lawn. Grasses, which include corn and wheat, have a natural tolerance to that chemistry. Biotech may be allowing plants to do new things, but we are really just mimicking something nature has already shown us is possible.

I see many people say that seeds are soaked in glyphosate which is the active ingredient in Roundup. I’m not really sure where that idea comes from, but seeds are not somehow filled with herbicide. I think it’s possible people are confusing herbicides and insecticides thinking Bt and Roundup are the same thing. Bt traits protect crops like corn and cotton from pests like European corn borer.

Another fallacy is that GMO crops failed in the drought of 2012. As if somehow, during the worst drought since 1988 or maybe even the Dust Bowl era, nature was supposed to give us a normal yield because our crops are able to protect themselves from pests and resist certain herbicides. Drought-tolerant varieties of corn were not widely available to growers in 2012. I’ve grown Pioneer’s version of drought-tolerant corn in a test plot. It beat everything else in the plot hands down. Wide availability of drought-tolerant corn varieties will spread in the next year or two. Drought tolerance and water-use efficiency could be game changers for water use in the highly irrigated areas of the Great Plains. It should also be noted that all the corn being marketed as drought tolerant was brought to fruition by conventional breeding techniques except for Monsanto’s. Theirs will be the one genetically modified version.

Farmers make plans on how to plant and manage their crops several months before actual fieldwork begins.  In the end we all understand that weather will be the ultimate factor in determining the success of those plans. In agriculture there are countless variables in play when managing a crop, and the one thing you have no control over is the weather. It can rain too much or not enough. Temperatures may be great for crop growth, or they may be too hot or too cold. Farmers must do all they can to realize the potential of a seed, but nature will always dictate a large portion of yield.


So, do you still think Monsanto is an evil empire out for world domination? Why don’t we just leave it at this: a company like any other, trying to make money. Some people call this greedy, but the rest of us also spend most of our lives making money. So if you dislike (or hate) Monsanto, then maybe it’s time to encourage other biotech innovations to make seeds better, cheaper, or both, to offer Brian and other farmers like him a better deal. (As Dr. Kevin Folta told me in my interview with a scientist, there are many seeds paid for with tax dollars sitting on shelves around the country that are better in several respects than what Monsanto has provided us. As long as they are shielded from competing against these seeds, farmers do have limited choices. You can read my interview with him here.) Competition and a dynamic marketplace are what give consumers the most choice and power, and right now, Monsanto pretty much stands alone, having cornered a majority of the market. Many of their practices are rooted in this power and in laws (not in the science and seeds), so let’s go about encouraging innovation and competition.

And if you are against the consumption of GMO foods, there is no need for a new label. There is already one that tells you the exact same thing: ‘Certified Organic’ is another way to say “GMO-free.” GM ingredients are in around 80% of supermarket products, so it’s a safe bet that much of what you see in the supermarket contains one. There is no need to create ever more regulatory hoops to label GMO food when the opposite label means the same thing. As for me, though I live for the moment in Europe, where I can’t get GMO food even if I wanted to, I’ll not shy away from it in my travels; it is my opinion that GMOs are the future of food. (Note: I am not saying I think organic production is going away, or that everyone should eat GMO food because I said so; as long as there is a market, there will be self-interested people looking to make money by providing that product.)

Biotech seeds have been the fastest-adopted agricultural technology in history. Pandora’s box has been opened; there is no closing it, only managing it, so let us manage it better, and that will only occur if farmers are convinced. So if you have issues, have them not with the science or technology, but with the handful of controlling companies who are only responding to the incentives the market has provided them. Competition is needed, not an outright ban, which is probably impossible anyway. But it is heartening to me that family farmers are not disadvantaged by using what is available now. (I know that Monsanto has disadvantaged some family farmers, but that is a case against the company, not against GM seeds, and it doesn’t mean they are out to screw everybody else as well. They act in their own interest, as does any other company.) And as for the subject of chemicals that always comes up, let us put them in the proper context:

“Every compound you can name, no matter how scary, has a safe level; and every compound, no matter how natural, has a toxic level.” ~ Brian Dunning (Author)

Thanks, Brian, for making food for the rest of us. We, or at the very least I, are grateful, and I trust that you know what you’re doing.

[UPDATE: Part 3 in the series: Lowdown on GMOs with a Biotech Firm can be read here.]

Q&A – The Lowdown on GMOs with a Scientist

GM Food

Last year, as those who’ve read the first edition of my book will know, I was anti-GMO. Why? Well, I thought I had the evidence on my ‘side’. But I can now honestly say it was because I had no idea what I was talking about. (Need further proof I’m an idiot?) My knowledge of the subject was inadequate; much of that knowledge came from biased sources; and I’m sure there was some social conformity bias in there somewhere. (I’m sure there were many more biases, but honestly, listing my own biases is depressing. I’d much rather do it to others. That’s where the fun is at!) I’ve just released a 2nd edition of my book, Random Rationality, and that stance has been rectified.

In the meantime, I've delved into some of the literature and debated the safety of GMOs with friends. In doing so, I also reached out last week to Dr. Kevin Folta (his profile and academic history here, and check out his highly informative blog here) to confirm what I had learned, and to find out why GMOs are so misunderstood. Dr. Folta is a plant geneticist at the University of Florida who specializes in plant molecular biology, and he was kind enough to share his thoughts with me on his area of expertise. Our exchange is below; you'll find it brief but extremely informative. I've bolded some of his statements, those that I consider important.


The Lowdown on GMOs with a Scientist

Fourat (Me) – What is the main thing (or is it general) about GMOs that the public routinely confuses, or gets wrong, when discussing and debating their impact?

Kevin Folta –  There are so many misconceptions. The first is a fundamental one, that being that there is a debate at all.  There is no debate among scientists in the discipline of plant molecular biology and crop science. Sure you can find someone here and there that disagrees, but there is no active debate in the literature driven by data. There are no hard reproducible data that indicate that transgenics are dangerous or more potentially dangerous than traditionally bred plant products.

If I had to nail down the most annoying misconceptions, they would include the idea that all scientists are just dupes of big multinational ag companies. Anyone that presents the consensus scientific interpretation of the literature is immediately discounted as some corporate pawn. There's nothing further from the truth. Most of us are hanging on by a thread in these days of dwindling federal, state and local support for research. The attacks on the credibility of good scientists hurt our chances to stay in academic labs, and make us consider the cushy salaries and job security with the big ag corporate monstrosities we chose not to work for when we took jobs working for the public good. That's pretty sad.

There is this allegation that we hide data or don’t publish work that is inconsistent with corporate desires. They need to get one thing straight. We’re not in the public sector because we are excited about listening to some corporate mandates. No thanks.  We’re here for scientific freedom and to discover the exceptions to the rules and define new paradigms.

If my lab had a slight hint that GMOs were dangerous, I’d do my best to repeat that study, get a collaborator to repeat it independently, and then publish the data on the covers of Science, Nature and every news outlet that would take it. It would rock the world. Showing that 70-some percent of our food was poisonous? That would be a HUGE story — we’re talking Nobel Prize and free Amy’s Organic Pot Pies for life! Finding the rule breakers is what we’re in it for, but to break rules takes massive, rigorous data. So far, we don’t even have a good thread of evidence to start with.

The other huge misconception is that you can “prove something is safe”. Nothing can be proven safe. We can only test a hypothesis and show no evidence of harm. You can’t test all variables — nobody could. We can ask if there is a plausible mechanism for harm. If there is, we can test it. If there isn’t, we can do broad survey studies. A scientist can search for evidence of harm — a scientist can never prove something is safe.

2 – In what ways might GMOs be most beneficial to our biosphere, and why might organics not be as well suited to get us there?

Kevin Folta – There is no doubt that transgenic plants can be designed to limit pest damage with lower pesticide applications. That is well documented by the National Academy of Sciences, the best unbiased brains in our nation. Most of the data are for cotton and maize, and show substantial reductions (around 60%). Transgenic potatoes were amazingly successful in Romania until the country joined the EU and had to go back to insecticide-intensive agriculture. Even glyphosate-resistance traits, for all of their drawbacks in creating new resistant weeds, replace more toxic alternatives.

Conventional farming takes fuel, labor, fungicides, pesticides, nematicides and many other inputs; water and fertilizer are in there too. There are genes out there in the literature that address most of these issues. Scientists in academic labs discover these genes and define their function in lab-based GMOs that are never used outside the lab. The regulatory hoops are too difficult and expensive; only the big companies can play in that space. Even little companies like Okanagan Specialty Fruits have to deal with the nonsense from those that hate the technology. Opposition to the science keeps the big guys in business, because nobody else can compete.

Who loses? The farmer, the consumer, the environment, the academic scientist and most of all the people around the world that don’t get enough food and nutrition. Who gains? Big ag.

3 – What do you consider the most important aspect of differentiating the good from the bad when it comes to considering science? i.e., what is the first thing you look for after reading a study?

Kevin Folta – In the short term, I consider the system studied. Was it an animal system or cells in a dish? Most of the anti-GMO work is done on cells, especially cell lines that sound scary (like ovary, testis or fetal cells) but have little relevance to the complexities of animal systems. If done in animals, was the experiment properly controlled? Do the researchers SHOW the controls (like those conveniently omitted from Seralini's 2012 rat-cancer work in Figure 3)? Many studies that look good compare a GMO to an unrelated plant type, which is just not a valid comparison. Plants produce toxins and allergens, so you need to test the exact same plant without the added gene. If they do the rest of this properly, then they need to run sufficient numbers and use good, common statistics. If they do all of this, the work is publishable after peer review and should go into a decent journal, not some low-impact journal that publishes incomplete work or work that oversteps the data.

A lot of junk escapes peer review. Reviewers and editors are overstressed and overburdened these days. We do the work as service for the field. Occasionally a paper slips by in a lower-impact journal. You’ll find most of the anti-GMO papers there.

Another important attribute of good work is demonstrating a mechanism. For instance, don't just tell me that you found some evidence of a GMO harming cells; tell me how. How does it happen? If the phenomenon is real, the mechanism should be dissected out within a year's time. Omics tools are incredibly sensitive, and we can detect small differences in gene expression and metabolic profiles. If GMO harm were real, the authors would define that mechanism, then collect their Nobel Prize and Amy's Pot Pies.

The ultimate test is reproducibility. You'll see that the best "evidence" for harm from GMOs comes from obscure journals, from aging references that were published and then heavily refuted by the scientific community (Pusztai, Seralini, etc.), and from work that was never repeated by outside labs. These are flash-in-the-pan works that are never expanded beyond the seminal study. The best sign of real science, good science, in an edgy area is that it grows. You see more scientists pile on, more research, more funding and bigger ideas. Models expand, mechanisms grow.

That just does not happen in the anti-GMO literature. The same authors publish a paper and then it goes on the anti-GMO websites and gains attention — while it dies in the scientific literature with no follow-up.

4 – Is there any split in the scientific community as to the safety of GMOs? If so, where does the split lie?

Kevin Folta – There are splits in the scientific community like there are splits for climate change and evolution. You have scientists like NIH Director Francis Collins who support creationist leanings. You have a small set of meteorologists and atmospheric scientists that claim that climate change is not real. There's always room for a dissenting opinion out there, but they usually don't have good evidence, just belief.

The same is true in biology and plant science.  There are a few out there that let philosophy rule over evidence, but they are not at the edge of research. In the circles I work with there is consensus about the safety and efficacy of the technology. Even those that study organic and other low-input production systems support biotech as a way to do their jobs even better. That’s a strange relationship many don’t expect. You’ll not see anti-GMO writing from too many tenure-track scientists at leading universities.

There is confusion on this. The Union of Concerned Scientists is frequently used as evidence that scientists are against this technology. When you read who they are and what they do, they are activists. They don’t do research or publish in the area of biotech. There are also others that claim to be experts or exploit some tenuous university affiliation to gain credibility. They should be looked at as deceitful, but they are accepted and believed with great credibility. People like Mercola, Smith and others sure sound like they know what they are talking about but they are not experts. Even Benbrook, a guy with a great career and a wonderful CV, goes off the deep end on the topic.

Readers need to apply all of the filters we discussed here today.  What the data really say, who did the work, and if it was reproduced independently are the most important criteria in separating reality from fiction in the GMO topic.


If you stand for scientific integrity, and going where the facts take you, then please share this Q&A so it may reach a wider audience. Almost every factoid from the anti-GMO crowd has been thoroughly refuted, debunked, and repudiated by the scientific community. Millions of lives depend on the future of our food production, which means they depend on scientific experimentation and information untainted by ideology. The science is settled, and has been for some time. And as Dr. Folta, and others, have elucidated, the intense opposition to GM technology has only tightened Monsanto's grip upon the market. Facebook it, tweet it, re-blog it, or Google Plus it. Give my blog credit, or don't; I don't really care. Good science matters more than pageviews (though pageviews are still nice), and more scientists like Dr. Folta should have their voices heard instead of the fear-based, fake-facts groups out there shouting from the rooftops who don't know the first thing about genomics, evolution, or reality. (If you enjoyed this article, you may enjoy my last one on science in general; read it here.)

Ready. Set. Share!

[UPDATE: Part 2 and 3 in this series; Lowdown on GMOs with a Family Farmer and Lowdown on GMOs with a Biotech Firm can be found here and here.]

Randomly Scienced


Since a very young age, I've been fascinated with science (I first fell in love with cosmology). Every year since, my appreciation of science has grown, though my knowledge of it has not grown nearly as much, which is one of my chief regrets. In this post, I want to lay out some random observations I have accumulated while watching the science-vs-dogma debate play out.

#1: Epistemic Dictatorship
One day I found a comment on a blog post in which a friend and I had debated back and forth on God, the Meaning of Life, etc. The commenter asked what life is, saying he could not imagine it as a meaningless pile of interacting chemicals, and wondered where consciousness could have arisen from. A second commenter had spent many days lambasting me over my knowledge of cosmology (he held philosophical opinion above observational cosmological evidence, so I should have ignored him, but I foolishly didn't), accusing me of wishing an epistemic dictatorship upon humanity, of being scientistic, and of being 'just another guy confusing cartesian bifurcation for reality.' After heaping all that lambastation upon me, he responded to the first commenter, saying that since it was impossible for consciousness to arise by itself, it must have been created by a conscious agent. Circular reasoning at its finest.

#2: The Irony of Denying Evolution and Cap and Trade
In America, the religious right have fervently set themselves to making war upon the theory of evolution for offending their presumed sensibilities, and also upon global warming, taking particular issue with cap-and-trade. Of course, the sweet jingle of irony never lingers far from those who hold facts at bay. They disdain cap-and-trade because they are fixated upon the short-term profits from coal, oil, gas, and shale, much to the detriment of the long-term health of the biosphere. And the rank-and-file Republicans, who have been indoctrinated into tying their economic security to the elite factions of the party, have swallowed the party line hook, line, and sinker: namely, that cap-and-trade will irrevocably extinguish short-term economic growth (they seem unable to see that other businesses and technologies would pick up the slack for long-term growth). Evolution at its finest: shortsighted animal instincts focused on the here and now, on the security and safety of short-term profits, while the uncertainty of the future is ignored or kept at bay. They preferentially exercise the lower-order thinking we have inherited from our evolutionary ancestors, giving their neocortices a much unneeded vacation. (Though I do not wish to offend anyone unjustly, I can find no other way to express it. This is not to say that only the religious right express such dimwitted sentiment, but they are, unfortunately, the most pernicious about it. To be fair, the left have their own share of madness: anti-nuclear despite nuclear being the safest form of power generation, and anti-GMO despite the fact that even organic food today is, in some shape or form, genetically modified. All we are doing is replacing blind evolution with purposeful evolution, something necessary if the worst of climate change does occur and increasing desertification and erratic seasonal rains start obfuscating our attempts at growing food.)

#3: Anti-Scientists
Then we have the anti-science people (in a more general sense), who look to science's past to discredit its present. These invariably crop up in science-vs-religion debates, usually invoking Stalin's and Pol Pot's atheistic, materialistic agendas, Nazi eugenics, Soviet Lysenkoism, or past scientific claims of white racial superiority.

These arguments fall flat for several reasons. Firstly, as most secularists are aware, the 'atheistic' regimes of Stalin and Pol Pot denounced religion only on the surface; in reality, they simply replaced the God in religion with the State. It was merely religion in another form, and it speaks more to the danger of religion than to that of atheism. (Besides, the new atheist movement is not about just being an atheist. In fact, that is the last thing it is about. It is about using reason and empirically sound, validated methodologies to improve the lot of everyone.)

Back, however, to the charge of anti-scientism, and to attack their proposition directly: they assume – one might say demand – that science must have gone from 0 to 60 immediately (0 being the blind superstition of our ancestors, and 60 being where we scientifically find ourselves now), without first passing through 1 to 59. (As if the Pentateuch, New Testament, and Quran just fell from the sky in one piece, instead of being the accumulated baggage of earlier religions and cultures – and that first religion from which the others derived, whatever it might have been, it is reasonably safe to say, was based on ignorance of nature.) Yet, while many excuses are made for religion's failings in the past and present (and, let's face it, future), they point to science as if it were a cohesive, secular, centralized entity that popped out of nowhere; and, unable to find many solid examples of its failing today, they look to its ignorant past so they may continue their smear campaign. (I am not insinuating that science is perfect. Far from it: from publication bias, to reporting bias, to funding bias, to inefficiencies in the peer-review system, to taxpayer-funded research thrown behind paywalls, science has a lot to set straight. But, as is so often the case with science, these problems are one by one being tackled and will eventually be overcome.)

To go through the charges one by one: there was never any empirical basis for Lysenkoism, especially given how well established natural selection was by then; that it hid under the veneer of science did not make it science. The Soviet famines caused by such blind faith in Lamarckism were not the product of a scientific attitude, but of unwarranted faith in an unscientific agronomist who put his faith before reality.

Now take racial superiority, which for thousands of years was coddled by the religious texts of the world. The churches instilled into the white, ignorant populations under their domain the required incentive to rationalize the subjugation of non-whites, and thus the educated elite of their day sought meaning where there was none – this latter trait being basic human nature; all humanity suffers from its thorny thistles – seeking to prove white superiority instead of deducing from first principles, namely nature. (Scientists aren't gods; they are subject to the same biases and agendas of power as everyone else. The word 'scientist' didn't even exist until 1833, so to speak of scientists before then is somewhat meaningless; they were just people, with all their biases, shortcomings, and blind spots. For all of Newton's genius, he was an alchemist; Darwin set forth on the HMS Beagle still a believer, and then almost didn't publish On The Origin Of Species for fear of backlash; and Galileo regarded the Bible as an alternate source of truth just as much as nature herself.)

There is also the further myth, propagated into the European zeitgeist, that Europeans were high, mighty, and superior to all others because they were the first to practice some proto-scientific methodology. Many religious people give the church credit (or take credit, rather) for harboring the scientific method and the universities through the conflicts and plagues of Europe, which it indeed did. But they ignore the fact that this was only possible because the Muslims (the 'Saracens', as Europeans arrogantly called them) had brought with them the works of Ptolemy, Aristotle, Plato, Euclid, Hippocrates, and the wisdom of the ancient Greeks, which they had translated, copied, incorporated, and spent 400 years theorizing and building upon, with funding from the caliphs (who considered it their duty to learn more of the world, and so poured money into scholarship and into the building of huge libraries compiling such great works of knowledge as the Book of Healing and the Canon of Medicine, the latter a million words long). In the process, they far surpassed the superstitious peasants subjugated by the feudal and petty lords making war upon one another over in 'high and mighty' Europe. Eventually, this constant warring would prove beneficial, as it did not allow the rot of stagnation to take hold and thus encouraged innovation in the machinery of war, in productivity, and in agriculture – but this only took hold after the Muslims had brought all their knowledge and shared it freely. It was by the sheer dumb luck of being so ignorant that war was inevitable that the conditions proved so fortuitous, paving the way for the Enlightenment – not forgetting the Muslims bringing with them the translated knowledge of the ancients, as well as their own formidable knowledge-bank.
Toward the end of the 12th century, the scientific decline of the Islamic empire began, as it turned to pursuing spirituality rather than science or knowledge for its own sake – as was also the case with China. It is only very recently that knowledge has begun to be pursued for its own sake on a large scale.

To attempt to taint science's past – which is much younger than many people think – in order to discredit its present is akin to watching a 12-month-old baby take its first steps, watching it fall down several times, then telling the young chap to stop trying for fear of further failure, and that crawling is a superior method of transportation (read: truth). Then, once the cute little baby figures out how to walk on its own and starts running and jumping, they continuously point to those first few steps as proof that the baby failed first, and that therefore every step it takes is to be looked upon with suspicion – not as proof that walking, running, and jumping are superior to crawling. This, in a nutshell, is what people mean when they call science an epistemic dictatorship, or refer to its practitioners as scientistic, and blah blah blah >>insert meaningless insult here<<.

Where the mindset comes from that demands reality conform to our subjectivity instead of the other way around, I will never understand. Never will I ever. And some of these people have the balls to call scientists arrogant for wanting to know the way the world really works…

Mind-Reader


Neuroscience is one of the sciences feeling the exponential progress of technology most keenly. With the invention of the fMRI machine, we can peer into the brains of people (and presumably animals). Each year, the tools and techniques we use to probe the brain double in precision, finesse, and resolution (i.e., we can resolve more and more detail in less and less time), until eventually – some say between 2030 and 2040 – we will be able to see all 100 billion neurons and their 100 trillion inter-neuronal connections firing in real time in the human brain. As these technologies, and several others, increase our quantitative understanding of the brain, other technologies are increasing our qualitative understanding, i.e., teaching us to decipher the organized chaos of the mind.
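To put that timeline in perspective, here is a minimal back-of-envelope sketch. The million-fold figure is my own illustrative assumption, not a measured gap, but it shows how annual doubling compresses a huge improvement into a couple of decades, roughly in line with the 2030-2040 estimates above:

```python
import math

# If imaging resolution doubles every year (the assumption above),
# how many years until a hypothetical million-fold improvement?
improvement_needed = 1_000_000  # purely illustrative, not a measured figure
years_to_reach = math.ceil(math.log2(improvement_needed))
print(years_to_reach)  # 20 years of annual doubling
```

Twenty doublings is a factor of over a million, which is the whole magic of exponential trends: the last few doublings do most of the work.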

Scientists can already mind-read words that a patient reads silently (note: this cannot yet be used to read what you're thinking, only to match up what you're reading). Scientists have also figured out a way to reconstruct movie clips that people were watching from their minds, as well as to reconstruct the voices in other subjects' heads, laying the groundwork for mind-reading far in the future. (I do hope Moore's Law doesn't allow those devices to become portable; though conceivably, even if they do, technology will be invented to keep out eavesdroppers – Norton BrainSafe? On special for only $999.99. In fact, just yesterday, an app named Silent Circle became available for iPhone and iPad that creates uncrackable peer-to-peer networks to call, message, and send files. [The app must still pass an independent security test, which it will undergo soon, so grain of salt.])

But I don't want to get bogged down in technical jargon and scientific details. If you want to go in-depth on such subjects, chapter four of Kurzweil's The Singularity Is Near is a solid primer. (I imagine his new book, How To Create A Mind, explores chapter four's topics in even greater detail, but I haven't read it yet.)

What I do want to explore is what we might do with such technology once it becomes cheaper and more capable in the coming years. (We won't have to wait until 2030 to take advantage of it, though it may take that long for the brain-deciphering advancements mentioned above.) I'd love to see this tech trained on animals. Just think of what we'd learn. We know that dolphins have a language: they have syntax and grammar, have been known to outsmart humans, and even introduce themselves to newly met dolphins. In his (amazing) book Pale Blue Dot, Carl Sagan mentions that in flying to space, we discovered the Earth. It might likewise be said that in talking with our first fellow species, we will have discovered our humanity.

What will we learn talking to a chimp? Or an ape? Or our dogs and cats? Who wouldn't want to know the width and breadth of their thoughts? How they think, why they think; do they have a capacity for choice, and if so (a safe assumption to make), how much?

The story of civilisation is that of our ever-increasing circle of compassion. That is, as our technology advanced, we became less likely to view others as sub-human, and began viewing them – properly, no less – as equals, thereby laying the groundwork for new moral truths and thus more moral societies. We are moving beyond our evolutionarily endowed tribal mentality. (We are not yet out of the woods, but we are, oh, so close.) It only seems logical to extrapolate that this circle of compassion will expand, and indeed it already has, to the denizens of the entire animal kingdom. Perhaps, on that day, resistance to the theory of evolution will stop? (Though that may be wishful thinking on my part.)

What animal would you want to talk to first? And why? I’m all for the dolphin, but let me know in the comments below.

The Three Choices of Creation


Out there on the interwebs, there is a war going on for the soul of something, and it is known as the fine-tuning argument. It is essentially the argument that the constants governing the Universe as we know it were fine-tuned by an external creator to allow intelligent life, such as ourselves, to flourish. If any one of these constants were changed just a smidgen, then life (as we know it) couldn't exist.

There are three ways to look at this:

1. There is an external designer (God) who fine-tuned these constants to allow for our existence
2. It was sheer, blind luck that our universe had these constants and not some others—assuming that is, they could be anything else
3. These values are what they are because we live in a multiverse in which all possible values of the constants are instantiated, and we find ourselves here simply because here is one of the few places we can be (the weak anthropic principle)

(1) is obviously what the religiously inclined would choose. (3) is what many scientists and the scientifically inclined would choose, though of course not all. There are many debates and discussions out there taking sides, giving evidence and reasoning for this and that, but I see very few people discussing (2). So I want to get some skin in the game, but from a different angle – I'm sure there are others out there who do see it this way; I just haven't found them. The internet is a big place, or so I've been told.

Let's assume (1) to be true. How could we ever rule out (2)? The answer, as far as I can tell, is that you cannot. Sure, we could say that if we received a sign we'd be sure: one night we see the stars rearrange to spell the words "I Am That I Am", or a book is beamed down from the heavens that explicitly details the spookiness of quantum entanglement or some equally advanced knowledge we have not yet arrived at. But then we couldn't rule out an advanced alien species playing a practical joke on us, or giving us advanced physics we are not yet aware of; so we're back at square one. There is no way to definitively rule out chance – or aliens. Theologians often make this same point against scientists (they call it scientism), insinuating that we can never be sure of what our scientific theories tell us, and then in the next breath invoking God (the irony is lost on them).

Now, let's assume (3) to be true. How do you rule out (2)? The answer is, you don't need to: it is part and parcel of the same package. Essentially, both come down to chance. We might not be able to say definitively whether we live in a singular universe or a multiverse – although there are ways we might one day get observational proof of a multiverse. But we should be able to say with confidence that (1) could, in principle, be ruled out definitively, and either (2) or (3) be true, without fear of going awry.

One way or the other, something has to be infinite and eternal into the past: either God (or some other entity) for (1) to hold true, or the universe/multiverse of (2)/(3). The latter two carry one less assumption, being that the wider reality of which we may be a part has none of the complicated attributes – intelligence, creativity, and/or emotion – that God is seemingly endowed with. (We don't need to explain why that wider reality, if indeed it exists, is simple, non-material, and non-sentient, though we would if a God were involved.) Anyone else have an opinion? To me, this seems too easy. I feel like I'm missing something.

RE: A Terse Explanation for the Finite Nature of Religion


Both Heathen Heart and R.L. Culpeper have written a few posts between themselves discussing, and respectfully disagreeing on, the endgame of religion. So now I'm turning it into a chain-mail of posts by adding my two cents (and that's probably all it's worth) in response to Culpeper's post, linked here. It's written as a comment, but I'm posting it here because I needed to insert links as references, and it's also quite long (for a comment, at least).

So, you've made some great points in your post, and I'm inclined to agree with all of them. However, and forgive me for being blunt, I think they are rooted in applying your considerable intellect only to the short history humanity has had. The assumptions (or fundamentals) that have thus far underwritten our societies are changing, and will soon no longer be relevant. To elucidate this, let me use the example of a friend who took a similar position, but regarding GMO foods.

She said that science (read: genetic engineering) has never produced a healthier food than what we can produce organically. In this, she is not wrong. But what was also implied was that scientists will never produce a healthier food than nature, and this is false. (If we set our minds to it, we'll do it; history is replete with such examples: flight, telepathy (the cellphone), space travel, breaking the sound barrier, and so on.) Producing a healthier apple than nature merely requires the requisite knowledge and tools, both of which are coming online in ever greater abundance with each passing year. It's just a matter of time, because if nature can do it, it's possible; and since evolution never produces perfect organisms, there is always a better way to do it. Ergo, provided that research into GM food continues, GM food will one day trump nature's food.

So, to relate that back to your claim that religion will never release its hold upon humanity: I'd like to modify that statement, if I may. I think it should read "religion will never release its hold upon humanity while people remain uneducated, mis-educated, disease-prone, and conflict-prone, and while death provides the existential threat."

So let’s tackle them one by one.

Global illiteracy is on its way down, thanks to the Internet, cell phones, and increasing wealth (the trend is slow but progressing: global literacy is 84% now, while in 1990 it was 76%). Mis-education is a problem, but this too is getting better, and you need only look to Western countries to see that as economic growth increases, societal dysfunction goes down; more kids are sent to school as a result, instead of having to help the family get food and income, and religious fervour drops in turn. (A recent comprehensive study showed that religion, social stratification, and societal dysfunction are inherently linked, though which causes which is as yet unknown. Does society-wide religion cause economic inequality, or does economic inequality increase religious fervour? I think it's the latter, but there is no way to conclusively show one over the other.) This also tackles mis-education indirectly. A prosperous society is more likely to be a freer society, and a freer society is more likely to have criticism, debate, discussion, and opposing and dissenting opinions, all of which make their way into the hearts and minds of its citizens.

Disease-prone: this is somewhat self-explanatory. One hundred years ago, life expectancy was 47 years. It's 78.5 today in the West, 89 in Monaco, and 83 in Japan. Chad has the lowest at 48.69, but even that is higher than the global average of one century ago. More and more diseases are being combatted (Hans Rosling has an excellent four-minute video on rising life expectancy as a result of increasing wealth). But medicine, up until now, has been a hit-and-miss process. As Kurzweil says, we just found stuff that worked and kept doing it, with very little understanding of the underlying biological processes at work. With the price-performance of genetic sequencing improving ten-fold per year (five times the pace of Moore's Law in computers), it is getting cheaper to sequence DNA, understand the information processes that underlie biology, and start implementing preventive medicine instead of reactive medicine, which is resulting in lab-on-a-chip technology. (Soon, your cellphone will become your doctor and analyze your body on the spot. Pandemics will cease, health will increase, people will have more time to satisfy their own desires and to study, and quality of life will rise. This tech is coming online this year; I wrote a post on the future of medicine and health here, and here is a short YouTube video showing it in action.) Historically, life expectancy has increased one to two years per decade. But because biology is now an information technology, it is increasing exponentially (since 2003, when the genome was mapped), and within ten to twenty years, life expectancy will increase by one year per year. (Note: this requires no new technology, only that the technology and understanding we currently have continue along at a pace equal to or greater than Moore's Law, which is indeed happening and shows no sign of abating.)
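As a quick sanity check on that "five times the pace of Moore's Law" figure, here is my own back-of-the-envelope arithmetic, assuming a ten-fold improvement per year and one Moore's-Law doubling every 18 months:

```python
import math

# Ten-fold improvement per year, expressed as doublings per year.
genomics = math.log2(10)  # ~3.32 doublings/year

# Moore's Law: one doubling every 18 months.
moore = 12 / 18           # ~0.67 doublings/year

print(f"{genomics / moore:.1f}x the pace of Moore's Law")  # ~5.0x
```

The 18-month doubling interval is the commonly quoted version of Moore's Law; with a 24-month interval the multiple comes out closer to 6.6x, so the "five times" figure holds only under the faster reading.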

Tackling conflict: according to Steven Pinker (everyone owes it to themselves to watch his 18-minute TED talk, The Myth of Violence), violence has declined since the Industrial Revolution. In fact, the 20th century was, per capita, the most peaceful century in existence, even accounting for WW1 and WW2. War is becoming less and less common the more information about conflicts travels. We need only look at Vietnam, the first war to bring the reality of death and destruction back to the general population. Needless to say, it was deeply unpopular, and the conflicts since then have been consistently smaller and more sensitive to collateral damage. (I am not saying it has been roses and happiness since then, but there is a clear decline in the severity of conflicts in regions of the world where communication and information are abundant.)

Death is the big one and will undoubtedly remain the biggest motivator, but we must realize that even if no progress is made against death, progress against religion can still be made. Just look at the Scandinavian countries, Australia, several other European nations, China, and Japan, which are majority (or close to it) agnostic/atheist. Be that as it may, progress towards the dissolution of death is well underway, and even starting to appear in the mainstream press. For now, though, we must take it on assumption that death will one day be removed as the inevitable curse it is. The other examples I have shown are in progress, and so is this one, but until natural death rates hit zero, the jury will remain out.

You also mention political and economic inequality. I could write thousands of words on this, but to keep it brief: technology is changing the human landscape and bringing people out of poverty. The book Abundance is a great read to really understand the dynamics. (So is The Rational Optimist, I'm told, though I haven't read it yet; I will soon.) In the last century, per-capita average income has tripled (adjusted for inflation), the cost of food has come down by a factor of 10, shelter by a factor of 20, transportation by a factor of 100, and communication a thousand-fold. And in the last forty years, global poverty has halved while the population has doubled. So we are earning three times more, spending less on the necessities, and learning and enjoying more than ever. These trends are actually accelerating (the Law of Accelerating Returns). While we are not out of the woods yet, the trends are clearly in one direction and, short of some calamity, should continue.

Concordantly, global religiosity is on its way down (59% of people are now religious, 23% a-religious, and 13% atheist, with the "nones" the fastest-growing group and the youth leading the way). (Whoever said young people were useless? It is they who reliably change the world. Of course, the logical conclusion is that if death is kept at bay, might things never change? The answer, for me, is no, as we will tinker with our brains and augment our intelligence, becoming in the process more wed to truth than to our cognitive biases, as we are now.)

So, in answer to your questions. I do foresee a world of equal economic opportunity. (I think politics is obsolete and will go the way of the dodo in the age of Big Data we are entering. It's even said that the metric system will run out of prefixes to quantify the amount of data we will have by 2020.) Equal opportunity for education? Yes. Massive Open Online Courses (MOOCs) are ballooning in size: only an internet connection is needed to take courses at MIT and Stanford, and whole new schools such as Udacity and Coursera offer the information and teaching content of degrees, and are starting to be recognized by universities and made applicable for course credit. (It's early days yet, but the trends are there and heading in the right direction. Two billion people have internet access today; by 2020, it will be five billion, and soon thereafter close to everyone.) A time when people will want to learn? This one is harder to be confident about, but my gut says yes, and allow me to explain my gut (and subjective) reasoning. The more I learn, the more I want to learn. I am not content in not knowing, and though I have always been like this, I often lacked the leisure, time, or requisite knowledge to go out and gather more knowledge. I get better at this every year, and continuously want to continue. Now, with a sample size of just one, I cannot confidently extrapolate this to anyone else (though I'm sure I can to you), but I do think this innate curiosity is part of human nature. It requires that we adequately provide for people's basic needs, then education and wants, then the potential for self-actualization (Nietzsche's will to power: the Übermensch).
As we move forward into the future, we are becoming smarter (and the lag-time between the haves and have-nots is halving every decade [source: The Singularity Is Near]), so it is only a matter of time before inequality becomes insignificant. Here, I'll use the word "believe." I believe that once the needs of most people have been provided for, and they have been educated properly and become more prosperous, religiosity will decline, and people will want to know more, and thus wed themselves to truth. Big Data will also elucidate the many mysterious workings of the Earth and our societies, as well as make them accessible to the public.

I recently read an article on the explosion of Big Data and the death of the theorist. Historically, when we wanted to find out more about the world, we proposed a theory, computed the results, and gathered data by experimentation and observation to confirm or falsify that theory. This process is reversing. We are now generating so much data (scientific studies, tweets, Facebook, blogs and webpages, planes, trains, and automobiles, along with everything else our computer programs can find) that we can pull the theories out of the data and do science after the fact. This is great for two reasons. Firstly, less and less will get missed. Before, if nobody was thinking about or trying to find out something, then the theory was missed, lost forever, delayed, or, when found, often suppressed (we lost the knowledge to make aluminum for 1,800 years because of the Roman Emperor Tiberius, if I recall correctly). Now, with an army of AIs whose sole job is to pull theories out of the world's information, we will learn that much more about the world. Pandemics will become a thing of the past, resource depletion will be foreseen well in advance, and known troublemakers will be spotted beforehand and terrorist attacks possibly stopped. (If you read the article, which I recommend, you will see that Bin Laden's presence in Abbottabad could have been derived beforehand, to within 200 km, from publicly available information on the internet. Imagine the possibilities of stopping future attacks instead, which should do away with the politics of fear, and perhaps even the CIA and the military-industrial complex.)

So, I think the future is bright (provided we can move fast enough on climate change and other vexing problems of urgent immediacy), and we can do away with religion, or at least, and perhaps more likely, relegate it to irrelevance, much as flat-earthism is today. There are other interesting aspects I do not have time to explore, such as the merging of humanity into a global mind, or the technological potential of a universal fact-checker. (I recently had an idea for a script that scours what you read on the net and highlights dubious or false claims. We don't all have time to fact-check every claim we read (we are limited skeptics in that regard), but this is what we use technology for: to alleviate our shortcomings. Kind of like Watson, which will soon start informing and helping doctors with their diagnoses, because the amount of medical information is expanding exponentially and no doctor can hold it all in his head; we'll use AI to augment their powers of diagnosis, and I see no reason why it will stop at medicine. It will subsume all fields where knowledge is definitively known, and most likely provide probabilistic answers in other fields.) But I'm in a rush, so I'm skimming. (If you watch any YouTube lecture by Kurzweil in the 45-to-60-minute range, you will immediately see where I'm coming from, and I recommend that you do.)

Anyway, I don't disagree with anything you said; in fact, I learn a lot every time I read one of your posts. It's only that the dynamics of our society which still allow religious belief to be insulated from facts, truth, reason, and humanism are finite, and now that we are above the knee of the exponential curve, greater change will occur in ever-decreasing amounts of time. Lastly, I do not mean to make it seem so easy, or to underplay the consequences of any of humanity's conflicts, local or global. Merely that it is becoming easier to understand, communicate, and tackle them; this trend is becoming ever more pervasive and better understood, and the means of production ever cheaper, democratizing them in the process. There is a lot of work still to be done; a lot of people still needlessly die, and many more are unable to enjoy the comforts that many of us now enjoy. But these problems are becoming better and better understood and tackled, and it will only get easier in the future.

This is, believe it or not, brief, and I have explored these topics only quickly and inadequately. But I'd love to hear what you think, so feel free to write a counter-post, agreeing or disagreeing for whatever reason, and if need be, I can explain in more detail any point I've inadequately expressed. Looking forward to hearing from you.

Resetting Time and Redefining Human History


Dear secular friends, it’s time to change Time.

Reset the Gregorian calendar free of Christian references

Let's be brutally honest: A.D. 2013 is not only an entirely meaningless date to six out of seven people on the planet, it is also a demonstrably erroneous one. The Common Era did not begin 2,013 years ago. Nothing, in fact, took place in or around this period to mark even some minor shift in human civilisation, let alone a paradigmatic event worthy of partitioning epochs. 1 B.C. (Before Christ) and 1 A.D. (Anno Domini: In the Year of the Lord) are hollow markers, and we are petitioning Sir Paul Nurse of the Royal Society to open a global debate on resetting the Gregorian calendar free of these religious waypoints. It is our express objective that science, not Christian imagination, mark the rightful commencement of the Common Era in a truly representative, international, secular calendar.

Why Sir Paul Nurse?

261 years ago, members of the Royal Society made a frightful error which we believe the current president, Sir Paul, now has a duty to help set right. At the stroke of midnight on Wednesday, the 2nd of September, 1752, the Governing Council of the academy adopted the Gregorian calendar for the British Empire, and through it the world at large. It was, and remains, a measure of time unquestionably superior to the Julian calendar, albeit with one catastrophic, obnoxious flaw: 1 A.D. does not, in any way, represent the dawn of the Common Era.

What is the proposition?

To better honour our species, to better express the finer elements of who we are and what we’re capable of, to better represent our innate curiosity and drive to improve the societies we build, science, not Christian imagination, should mark the commencement of the Common Era. Through this online petition we at Reset the Calendar are urging Sir Paul to open a global debate calling upon experts from such diverse fields as palaeontology, anthropology, archaeology, astronomy, linguistics, art, mathematics and even philosophy to canvass human history in a way never before attempted and locate that point in time which will stand as the new date for the commencement of the Common Era.

What is an alternative date?

As a suggestion only, the inscribed 12,000-to-15,000-year-old Thaïs bone might be considered a strong contender for this new date. Credited by UNESCO as "the most complex and elaborate time-factored sequence currently known within the corpus of Palaeolithic mobile art," the Thaïs bone is evidence that someone (a nameless ancestor of yours and mine) was looking up and, over a 3½-year period, systematically wrestling some order from the celestial chaos passing overhead. Alone, it is an astonishing moment in human history, a planted flag heralding the beginning of the end of 1.5 million years of natural anarchy and the first stirrings of scientific order. It is a moment manifestly more deserving of celebration than the essentially meaningless 1 B.C./1 A.D., and although just a suggestion, it would mean this year is not in fact 2013, but rather 15013.

Click here to sign the petition.

For campaign updates join us at the Reset Facebook page or contact us directly at reset.the.calendar@outlook.com

Future of Work


This is the last chapter of my book. To those who have read this far, I am forever grateful. (If anyone wants to read the Introduction and Conclusion, just leave me a comment and I’ll email it to you. For now, I won’t be posting it online.)

Sub-chapter #20, of Chapter #5, Technology, of my ongoing rewrite and open editing process Random Rationality: A Rational Guide to an Irrational World. I would greatly appreciate any feedback, corrections, criticisms, and comments. If you want the full PDF of the book, then you can download it by clicking here—if you provide constructive criticisms in return, and live in the US, UK, or EU, then I’ll ship you a paperback copy of the book free of charge when it’s published. If you wish to read the previous chapters in one convenient place online, please follow this link, and lastly, thanks for reading!


 

A FUTURE OF WORK

Last but not least, what might become of our jobs? If we play our cards right, one day in the near, or far, future, jobs—as we know them today—will become obsolete. Let’s find out why, and why this will be a good thing, perhaps the best thing to ever happen to humanity.

We are partway through a trend that, once concluded, will result in a new renaissance (last time, I promise): an event that will be remembered for all time as the defining point when the potential of our creativity was unbound from the limits of society and a new global culture was born.

First off, a bit of history. For all of humanity's existence, we've had to work to survive, just as all other animals do, whether that meant hunting for food, tending to crops, or trading for goods, foods, or gold, and so on, until we find ourselves, the lucky amongst us, working the 9-to-5 in the here and now. This, by the way, is how work will change: it will move from being a necessity to being a leisure.

During this epoch, a trend has slowly, quietly, and unnoticed unfurled in the background: the ratio of man-hours to work done. From the start of civilization until the Industrial Revolution, a span just shy of some seven thousand years (depending on which history book you read), this ratio stayed fairly constant. That is, the amount of work accomplished per man-hour didn't deviate far from the historical norm.

Of course, civilization still prospered in some cases and progress was evident. This progress, while not increasing the work done per person, increased the number of workers in a concentrated area, often resulting in slavery, the moral black mark on our history; all those extra hands were able to carry out gigantic tasks, such as building the cities of antiquity like Rome, and, much later, Washington, D.C. (Contrary to popular belief, though, the pyramids of Egypt were not built by slaves, but by paid Egyptian laborers.)

When the Industrial Revolution kicked off in the mid-to-late eighteenth century, this ratio started increasing. That is, the same number of man-hours produced more work, otherwise known as productivity growth (PG). This was due to the machines and industrial processes we created: steam engines, coal plants, light bulbs, medicines, and factories that became extensions of our hands and minds, allowing us to work smarter, travel farther, produce more, and live in better health.

This trend is responsible for almost everything we have today. Technology started replacing human labor, and that replacement continues to this day, allowing us to have that little thing we call comfort; unhindered, this trend will progress further and exponentially faster with time, as it has since it began. None of the tragedies of the 20th century put a dent in the exponential increase of computational progress—that includes WW1, WW2, the Great Depression, and others.

We went from manual-labor farming to horse-drawn ploughs, to tractors, to automatic irrigation, and soon to underground farming. From hauling stone slabs on sleds, to the wheel, to the horse-drawn cart, to the electric car, to the internal-combustion engine, and hopefully back to the electric car soon. (I know what you're thinking: yes, the electric car was invented first.) And these are just a few examples among many thousands.

This increase, positive or negative depending on your viewpoint, your time horizon, and the type of job you have, has an ugly consequence. People have been losing their jobs for the last 150 years as machines replaced their professions: from the elevator man, to the soot-shoveler, to the autoworker, to many, many others.

So far, though, there has been a technological caveat. As society has progressed, new jobs have been created, continuing economic expansion. However, this trend of new jobs replacing old jobs is beginning to stutter. In 1993, there were 194 million Americans in the labor force, and by 2000, this number had increased to 213 million. Over those years, 22.7 million jobs were added along with the 19 million new workers, leaving a surplus of 3.7 million jobs. Between 2001 and 2008, the labor force went from 215 million to 234 million people, but with only two million jobs added in that same period: a deficit of 13.7 million jobs. Since 2008, we have lost just over half a million more jobs (4.317 million lost vs. 3.765 million regained as of mid-2012), bringing the total deficit to 14.25 million jobs, in just nineteen years.

Every month, the labor force expands by approximately 125,000 people due to population growth; that's 125,000 new jobs the economy needs to add just to keep the unemployment rate steady. By 2050, the labor force is projected to be 45% larger than today, or approximately 339 million people. That's more than 100 million new jobs that need to be added by then, in the USA alone. In the rest of the world, the population is projected to increase by at least two billion, and perhaps three billion, according to UN projections. Where are the jobs going to come from? From nowhere, it seems.
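Those 2050 figures are easy to verify with a little arithmetic, using the ~234 million labor-force figure quoted earlier and the 45% growth projection:

```python
# Figures from the text: ~234 million Americans in the labor force today,
# projected to grow 45% by 2050.
current_labor_force = 234e6
projected_2050 = current_labor_force * 1.45
new_jobs_needed = projected_2050 - current_labor_force

print(f"{projected_2050 / 1e6:.0f} million in 2050")    # ~339 million
print(f"{new_jobs_needed / 1e6:.0f} million new jobs")  # ~105 million
```

So the "more than 100 million new jobs" claim follows directly from the text's own two inputs.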

Counter to this population increase, the technology we are creating (which shows no sign of stopping, only of increasing) is getting exponentially better, smaller, and smarter, to the point where it will literally be able to out-think and out-flex us. This shift, this realignment, this relentless progression of automation will continue until the only things left for the human mind to do will be to wonder, imagine, and explore the Universe—which also happen to be the things we are best at. Eating, drinking, and sex notwithstanding!

"[Carl] Bass points out that we are now at a great inflection point in the automation of labor. Extraordinary breakthroughs in the areas of artificial intelligence, robotics, and digital manufacturing are all converging upon one another, yielding a world full of technologies plucked right from the world of science fiction." [Emphasis mine] ~ Aaron Frank (writer)

We are going through an epoch unseen before in human history: the transition from a manual-labor society to a knowledge-generating, machine-operated society. We're currently in the transition period because, as is plainly obvious, we still have billions of people working, though many of them are struggling to scratch a living out of what they are given, or are able to take. But the underlying trend is undeniable.

But there are those who wish to roll back the dial, or to stop the buck here and create a static society, oblivious to the fact that every static society has collapsed; problems invariably crop up, and a static society cannot hope to innovate its way out of them. The American economist Robert Solow earned a Nobel Prize for showing that economic growth does not come from people working harder, i.e., just working longer hours, but from working smarter: getting more from less, and in the process freeing up time to do things impossible beforehand. Stopping or slowing technological growth and implementing employment for employment's sake is a straight path to disaster, reminiscent of 20th-century communism.

Back to basics. The reasons for the increasing mechanization of society are simple. It costs much less to have a machine do a person's work than a person, especially with the rising cost of labor, and with companies contending with trillions of new currency units floating around the world while doing everything in their power not to raise prices, so they cut costs instead. Machines have no health-insurance bills, don't get sick, and don't need vacation days or smoke breaks, nor are they distracted by an inner monologue, along with various other factors that hamper human productivity. These are all ancillary reasons, however. Many of the background processes of our world today can only be done by machines and artificial intelligence, in fields such as aviation, computer science, heavy industry, and even finance.

That the main rationale for replacing a person with a machine is usually to improve a company's profit margin and time to market, and not to automate society, does not make the result of these decisions any less real (or inevitable).

In the past, as people have become displaced from one profession, they have moved to other professions that could not be automated, or that were created by newly invented technologies.

In the twentieth century, as manufacturing jobs became mechanized, factory workers moved en masse into the services sector. For the last fifty-odd years, the services sector has exploded, most notably in the USA but also in much of the developed world, be it the restaurant industry or the world of financial services. The services sector is now beginning to bloat, and it simply cannot absorb such numbers anymore. Parallel to this, the wheels seem to be coming off the major world economies, and fourteen million jobs have been lost in the last eleven years in the USA alone, putting an extra squeeze on companies, who now see automation as a way to reduce costs and improve their profit margins.

Foxconn, manufacturer of Apple's iPads and iPhones, is planning to introduce one million robots to replace 100,000 workers in the next three years. The irony is that as more and more people are laid off and replaced by machines, the fewer products the company can sell in the long run. For a period of time, the company might improve its profit margins, as the rest of society hasn't yet succumbed to this transitional period, but this can only be temporary.

As more and more of society's jobs are automated—and it will happen one way or another, for the consequences of preventing it would be worse, as I'll get to soon—the employees are removed as consumers from the market. In a free market, employees, consumers, and employers are interchangeable; they are all one and the same. These former employees will no longer have the earnings to buy the increasingly mechanized products and services. Thus, we will (theoretically) reach a point where we can produce almost everything via automation, but there will be no one to buy the products. (Of course, we'll never actually get there, as something will give beforehand.)

What is going to happen to the millions of factory workers when 3D printing becomes affordable and fully capable, and factories a twentieth-century relic? To miners, when nanotechnology is economical and we can turn any material into anything else and build anything we dream of? To farmers, when we start growing our food (fruits, vegetables, and IVM) underground in luminescent rooms, allowing it to grow in a fraction of the time needed above ground? Not to mention landowners (40% of the world's arable land is used for farming or raising meat, and will become essentially valueless), the pesticide companies we'll have no more use for (food production, once moved underground, is out of the reach of insects), the transportation companies that ship food to market, and the factories that wrap and prepare the food?

These are all questions we need to be answering now, instead of when the time comes. Otherwise, we'll do what we always do when confronted with something different: we'll try to destroy it, or vote into office goldfish who want to destroy it for political gain. We aren't exactly the brightest bunch when we make decisions with our guts instead of our brains, which is why history so often rhymes. I don't think any society could stop or destroy this trend even if it tried. If America were to outlaw technological progression, after a little while the Chinese would be so far ahead that the American people would get shaky feet living under the yoke of a seemingly Godlike country on the other side of the world marching ever forward: bullet trains, towering skyscrapers (around the end of 2012, they plan to build the world's tallest tower, almost 3,000 feet, in ninety days), a moon base, a space station, electric cars, and the list goes on. Short of full-scale nuclear war or a worldwide dictatorship, the inexorable march of technological progress will continue. However, politics will stand in the way, and that may make a difference of years, or a decade, between the society that was and the society that will be. In the society that will be, where disease, cancer, and death are all history, a delay of even a few years could mean millions of people who should have lived but came up short.

Consider, for example, the controversy that met Golden Rice from anti-GMO activists and environmentalists around the world. Golden Rice is a strain of rice modified to carry vitamin A (beta-carotene). A lack of vitamin A is estimated to kill one to two million people per year, of whom 670,000 are children, as well as producing 500,000 cases of blindness, while one cup of Golden Rice is enough to supply them with sufficient vitamin A. Rice leaves naturally produce vitamin A through photosynthesis, but the endosperm (the edible part) does not, so scientists transferred two genes to make it do so. The new breed of rice was scientifically tested, and its vitamin A absorption was found to be as good as, or better than, other forms of the supplement. But anti-GMO activists successfully stopped its adoption and distribution to the parts of the world where it would have saved millions of lives per year! Think of the absurdity and stupidity of such a position: we put the fear of two modified genes ahead of the lives of millions of people per year, every year until the situation is remedied, because of some idealistic, bombastic, and shortsighted view of nature. Again, as we saw in the chapter Future of Food, almost all our food today has departed from natural selection as it is; it has been shot with radiation, hand-selected for breeding, and saved from extinction by human intervention. The very process of planting crops is a slap in the face of Mother Nature, but no one is protesting farms, just the future of food, which they do not understand. And after all this, genetic engineering has not been stopped, nor can it be, but the lives of those poor souls were indeed wasted. This is the inherent danger in rolling back, or just delaying, the wheels of progress: accidental genocide. And yet there are many people who advocate the relinquishment of technological progress (as if such a thing were possible anyway).

The costs of many services, products, and foods will continue dropping until one day they hit zero in terms of human-energy input, and shortly after, almost zero from a material perspective. Once we are at that point, we will have a choice to make, the biggest choice any society of humans has ever had to make, with consequences that will span centuries and affect billions of human lives.

We can transition to a resource-based economy, where people are simply given everything they need or want at no cost, since it costs nothing to produce from a labor standpoint, and very little energy thanks to the energy analogue of Moore's Law: as computers increase in power, doubling every 18 months while halving in size and staying at the same price, the amount of energy they consume heads in the opposite direction. For example, if the 2011 MacBook Air had the efficiency of a 1991 computer, its battery would last all of 2.5 seconds instead of seven hours. The difference is partly algorithmic in nature: better, more efficient algorithms doing more work in fewer cycles. What will be the point of money if nothing costs anything?
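The MacBook Air comparison implies an enormous efficiency gain; here is the arithmetic (the 7-hour and 2.5-second figures come from the text, while the 20-year span and the inferred doubling interval are my own calculation):

```python
import math

# From the text: a 2011 MacBook Air runs ~7 hours on a charge; with
# 1991-era efficiency the same battery would last ~2.5 seconds.
seconds_2011 = 7 * 3600   # battery life in 2011, in seconds
seconds_1991 = 2.5        # battery life with 1991 efficiency

efficiency_gain = seconds_2011 / seconds_1991
print(f"{efficiency_gain:,.0f}x more efficient")  # 10,080x over 20 years

# That gain implies one efficiency doubling roughly every 1.5 years:
doublings = math.log2(efficiency_gain)  # ~13.3 doublings
print(f"one doubling every ~{20 / doublings:.1f} years")
```

Notably, a doubling every ~1.5 years lines up with the 18-month interval quoted in the paragraph above, so the anecdote is at least internally consistent.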

Or, the elite, or whichever section of upper society comes into its momentary hold on power, being narrowly short-sighted to their own benefit (and thinking they know better), much as the rest of us are to our own benefit (and think we know better), will invent some other form of currency and keep the charade going round and round, convincing us that it is a necessary function of society to have government and classes. Go watch the movie In Time and you will get an idea of what could come to pass. I don't personally think this will happen, but the situation cannot be entirely ruled out in advance, especially given what we've fallen for in the past. Just think of the French Revolution: they threw out Louis, and installed Maximilien Robespierre, who gave the world his Reign of Terror. Then they threw him out too, and installed the power-hungry Napoleon.

In such a world, where scarcity is no longer a natural feature of life, economies built on scarcity will (or should) break down. The function of price is to assign value to a scarce product: the higher the price, the scarcer the product, whether through overwhelming demand, scarce materials, or a high cost of production. Aluminum was once worth more than gold, even though its ore makes up 8.3% of the Earth's crust, because until electrolysis came along, producing it was staggeringly expensive and energy intensive. The sciences and continually improving technologies have been nibbling away at scarce materials and the means of production for the last hundred and fifty years, making once-scarce resources plentiful. Whether it is food, metals, silicon, or electricity, you name it: it is more bountiful today than yesteryear (with the possible exception of human reason).

So when we have the technology to remove the human element and increase yields to such a degree that all elements of scarcity disappear, what purpose will the free market serve? What purpose will private industrial property serve? Or any (by then outdated) technology that lets one person hold sway over another person's right to life? The key trend that has accompanied our evolving society is that technology is both a resource-liberating force and a democratizing one. When the gun was invented, the poor peasant suddenly had a way to thwart the armored knight harassing him. Gutenberg's printing press broke the stranglehold the Catholic Church had held for over a thousand years, and the fax machine broke the Soviet Union's monopoly on information.

Much as the threat of violence is illegal in almost all cultures today, so it will be with the means of production in the future. There will be no benefit for a man or woman to own a technology that holds sway over others except for the sake of power, which may well come to be regarded as a mental disorder: a disruption to society's balance that cannot and will not be tolerated for the inequality, fear, and violence that may spring from it.

Think about crime today, almost all of which is motivated, one way or another, by money: directly, in theft, drug turf wars, and actual wars between nations over resources; or indirectly, through the emotional suffering inherent in unequal societies (stress, cortisol, and lost family time, to name a few effects). What will happen to crime? Person-on-person violence is at an all-time low; the twentieth century was the most peaceful century in human history, even accounting for both World Wars, as Steven Pinker shows in his TED talk, The Myth of Violence, and there is no reason, given future projections and technological progress, that it won't fall even further.

Technology is accelerating at an exponential rate and will continue to do so for as long as human cooperation continues. Our current forms of politics, governance, and society cannot, or perhaps will not, transition into such a futuristic society. We need new ways of governing that don't conflict with the fast-changing means of production, which will change over ever-shorter periods of time, each cycle bringing greater change than the last (the Law of Accelerating Returns).

Transitions are painful, an unfortunate fact of life, especially for locally and linearly oriented biological life like us. We don't deal well with change, which is why we tend to stay in societal systems far longer than we should, and why history repeats itself with dictators, tyrants, monarchies, economic fantasies, and republics of the people that end up serving the state first, placating the people just enough, and conducting needless war after needless war to the detriment and distraction of said placated people. As the American literary genius Mark Twain purportedly remarked, "It's easier to fool people than to convince them that they have been fooled." A sad fact of the human condition. However, this will be the first time in the history of civilization that we truly have an alternative: an option not bound to the fallacies and falsities that inevitably arise when millions of people converge on a society with their dreams, desires, egos, and jealousies. Once we arrive at that critical juncture, we will have the ability to free everyone from the confines of manual labor and mindless, repetitive work.

We will be able to truly provide everyone on this Earth with life, liberty, and the pursuit of happiness, instead of having them as words on paper paraded through the wheels of time as if they actually meant something.

A common response to such claims is that people derive meaning and purpose from work, the assumption being that in a world without mindless work, people would sit on the couch all day watching television reruns of a bygone age (since apparently people will stop making media). But this is a shortsighted notion. People today do all kinds of things without the incentive of a monetary reward. Wikipedia and Linux are just two visible examples of thousands of volunteers freely contributing millions of man-hours to building something of considerable value. Beyond those, people of all stripes and colors, without want or need of reward, regularly read and write books, gather knowledge, learn, collect trinkets and widgets, exercise their bodies and minds, create art and media, and contribute to millions of activities and hobbies. In a world free of the unnecessary (and time-sucking) jobs of today, we would have far more energy and time to focus on such activities, on our family and friends, and on the efforts we truly enjoy. Lifelong learning may become the new universal occupation.

“The role of work will be to create knowledge of all kinds, from music and art to math and science. The role of play will be, well, to create knowledge, so there won’t be a clear distinction between work and play.” ~ Ray Kurzweil (Inventor)

There is a lot of unnecessary pain and suffering in this world today, and there will probably be more before this transition is over, and more still if we collectively make the wrong choice. But the pain of this transition, done right, will be infinitely less than the pain of stopping or rolling back the wheels of progress.

Money may very well be a thing of the past one day. Here's to the future, and to the people and technology that will abolish human suffering once and for all. We can only dream for now, but the future is fast upon us. Without knowledge, wisdom, and a steady resolve, we cannot push into the future, for there will always be those holding us back, either for immediate personal gain or out of an irrational fear of the unknown.

“Our species needs, and deserves, a citizenry with minds wide awake and a basic understanding of how the world works.” ~ Carl Sagan (Astrophysicist)

Future of Tech


This is probably my favourite chapter. Here be sub-chapter #19, of Chapter #5, Technology, of my ongoing rewrite and open editing process Random Rationality: A Rational Guide to an Irrational World. I would greatly appreciate any feedback, corrections, criticisms, and comments. If you want the full PDF of the book, then you can download it by clicking here—if you provide constructive criticisms in return, and live in the US, UK, or EU, then I’ll ship you a paperback copy of the book free of charge when it’s published. If you wish to read the previous chapters in one convenient place online, please follow this link, and lastly, thanks for reading!


FUTURE OF TECH

 

The future is going to be very bright, brighter than many of us can imagine, though that is predicated on getting out of the way of the engineers, scientists, and companies that will make it happen. (Not that we shouldn't keep a watchful eye.) If we do, the stars are the limit.

This chapter will focus on two emerging technologies that have the potential to bring about a beautiful future, and try as I might, my account will more than likely be an underestimate because, well… I'm dumb. You think I wrote this book? I was compelled to write it by something claiming to call itself free will, but I digress… for the last time… maybe.

 

3D Printing

3D printing has the potential to render the factory obsolete, for a very simple reason: technology is beginning to move past economies of scale. Economies of scale means making so much of one product that the mass quantities bring down the cost per unit, allowing a cheaper selling price, greater sales, and a better chance of turning a profit.

A physical book makes a fine example (so long as I ignore print-on-demand). When a book is published, a certain number of copies have to be printed, bound, distributed, and subsequently sold to justify pricing it at, say, thirty dollars; otherwise the manufacturer and publisher take a loss. If the manufacturer prints a run only one-quarter as large, the fixed costs are spread over a quarter as many copies, roughly quadrupling the cost per book and making the initial investment increasingly difficult to recoup. Printing more books allows each individual copy to be sold more cheaply, which improves the odds of recouping the investment, turning a profit, keeping people in work, and, in this case at least, increasing overall knowledge.
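The book-pricing arithmetic above fits in a few lines. All the dollar figures here are made up for illustration; the point is only that a print run's fixed costs are spread over however many copies you print.

```python
# Toy economies-of-scale model: cost per book is the fixed setup cost
# spread over the run, plus the marginal cost of printing one copy.
def per_unit_cost(fixed_cost, marginal_cost, quantity):
    return fixed_cost / quantity + marginal_cost

FIXED = 100_000   # typesetting, plates, distribution setup (hypothetical)
MARGINAL = 5      # paper, ink, binding per copy (hypothetical)

full_run = per_unit_cost(FIXED, MARGINAL, 20_000)    # $10 per book
quarter_run = per_unit_cost(FIXED, MARGINAL, 5_000)  # $25 per book
print(full_run, quarter_run)
```

Note that it is the fixed share per book ($5 versus $20) that quadruples when the run is quartered, which is the sense in which a quarter-sized run makes each book roughly four times as expensive.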

With eBooks, there is no such restriction on the cost per unit, because the product is digital: there is no difference between having one copy and one million copies, just a simple command separating the two quantities. The eBook has become a digital information technology. The same thing is happening to objects. Physical objects are becoming (slowly for now, but with increasing speed) a digital information technology.

Today, every Jane and her Joe has a printer in the home, capable of printing rudimentary, usually multicolored characters onto a 2D sheet of paper.

The future of printing goes well beyond this seemingly simple technology: we will soon be printing physical 3D objects. The 3D printer, otherwise known as an additive printer, will be able to 'print' any object that fits within the length, width, and height of its laser-equipped arms; the user will be able to make three-dimensional, solid objects from digital files.

The first consumer 3D printers were released in 2012, but big corporations have been using these magic machines for decades for prototyping. If they needed a spanner, a spare car part, an intricate widget, or whatever else tickled their fancy, they simply printed it out to hold in real life. No theory, no spending hundreds of thousands of dollars to have it custom-made in a special factory far away: it was created, tested, and demonstrated to management and engineering right there in the office, without lag time or exorbitant costs, allowing many more innovative and riskier projects thanks to the savings. Before 3D printing, the shoemaker Timberland had to spend $1,200 and one week to create a prototype sole. Today, it takes them ninety minutes and costs $35. EADS, the aerospace company behind the iconic Airbus A380 (the largest passenger plane in the world), is printing shoe-sized titanium landing-gear brackets for use in its airplanes. Normally, such a part would be made by a process called subtractive manufacturing, which wastes ninety percent of the titanium (you start with a solid block, and titanium ain't cheap, then whittle it down to the final design). Additive printing is the complete opposite, and also allows more efficient structural changes and better integrity. EADS eventually hopes to print an entire aircraft wing! The savings in material and reduced time to production are enormous. 3D Systems (which invented additive manufacturing twenty-five years ago) is involved in a consortium printing hundreds of parts for the F-18 and F-35 fighter jets: machines that clearly demand the utmost precision. If it's good enough for some of the most expensive machines in history (between $154 and $236.8 million a pop), then surely our home accessories and cars will be more than satisfied.

Slightly off-topic: something similar, a decrease in cost and production time, will soon be happening with semiconductors (used in computer chips, batteries, and solar panels). A new manufacturing process has been demonstrated in which gallium arsenide semiconductors are assembled by growing them from freely suspended nanoparticles of gold, instead of being carved from silicon wafers by the traditional subtractive methods, accelerating their creation thousands of times over. This tech, while not explicitly part of the 3D-printing framework, operates on a similar principle (reversing the subtractive process); it is expected to be operational within two to four years and should yield just as significant a cost saving. By the end of this decade, computer chips will cost about a penny, and we'll treat them as throwaways, putting them in everything: clothes, tabletops, walls, you name it. A simple way to think of the increasing speed, efficiency, and clockwork reliability of computers is this: we are using computers to build faster computers, which we then use to build faster'er computers, and so forth. (The same goes for 3D printing, which is why I went on this little detour.)

Back to 3D printing. The manner in which additive printing works is quite simple. An object (encoded as a digital file) is selected and sent for printing. The printer then builds it one two-dimensional layer at a time from the ground up, using (in the first mainstream devices) a plastic resin that is laid down and heated with focused lasers, solidifying in the process. Layer by layer, the two-dimensional cross-sections build up until printing is complete and a three-dimensional object stands revealed. The size of the object is limited only by the three-dimensional reach of the printer's arms, though nothing stops you from assembling larger objects piece by piece: a table, a chair, even a plane.
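The slicing step described above can be sketched in a few lines. This is a toy illustration only: real slicers work on mesh files (STL and the like), whereas here the "object" is just a sphere defined by an inequality, which keeps the idea visible.

```python
# Reduce a 3D shape to a stack of 2D layers, bottom to top, for a
# print head to deposit one at a time.
def inside_sphere(x, y, z, radius=5):
    """Membership test for our example object: a sphere of the given radius."""
    return x * x + y * y + z * z <= radius * radius

def slice_object(is_inside, size=5, layer_height=1):
    """Return a list of layers; each layer is the set of (x, y)
    grid cells to fill at that height z."""
    layers = []
    for z in range(-size, size + 1, layer_height):
        layers.append({(x, y)
                       for x in range(-size, size + 1)
                       for y in range(-size, size + 1)
                       if is_inside(x, y, z)})
    return layers

layers = slice_object(inside_sphere)
print(len(layers))                    # one layer per unit of height
print([len(layer) for layer in layers])  # layers widen toward the equator
```

The printer's job is then mechanical: fill in each set of cells in order, let it solidify, move up one layer height, repeat.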

This technology will explode once it comes down in price for the mass market. The first printers rolling onto consumer shelves belong to the world of plastic, and are therefore only able to print products in plastic. With time, silicon, metals, and other materials will be added to the mix, and eventually combined in a single machine able to print electronics, watches (Rolex, anyone?), cars, food, and drugs. The technology has already been used to print human body parts: a lower jaw, blood vessels, teeth, even DNA, with printed bones five to ten years away. And the tech that goes into the 3D printer itself is subject to Moore's Law (a doubling of price-performance every twelve to eighteen months, so ten years from now printers will be approximately a thousand times more powerful and intricate).

These products are functional now; the one obstacle that remains is making them mainstream, something technology is exceptionally good at doing. Forty years ago, a state-of-the-art computer was the size of a building and cost $100 million. Today, a phone a million times smaller and a thousand times more powerful is probably in your pocket as you read this. This is Moore's Law: every twelve to eighteen months, computational capacity doubles for the same price (adjusted for inflation). 3D printers are subject to the same exponential increase in capability at constant cost; alternatively, forgo the increased capability and the cost of the current technology halves in the same time frame. The same goes for solar panels: every year they become roughly thirty percent cheaper (compounded) and steadily more efficient. Since 2009, solar costs have dropped seventy-five percent, even while contending with the Global Financial Crisis.
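The compounding arithmetic behind those claims is worth seeing once. Assuming the faster end of the doubling schedule (twelve months), capability grows roughly a thousandfold in a decade, and a cost that halves on the same schedule shrinks to roughly a thousandth:

```python
# Moore's Law as compound growth: capability multiplies by 2 once per
# doubling period.
def capability_growth(doubling_months, years):
    """Multiplicative growth after `years` of doubling every
    `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# Ten years of 12-month doublings: 2^10 = 1024, i.e. ~1,000x.
print(capability_growth(12, 10))

# The same compounding in reverse: the cost of fixed capability
# after ten years of halving.
print(1 / capability_growth(12, 10))
```

At the slower eighteen-month cadence, the same decade yields about a hundredfold growth, which is why "a thousand times more powerful in ten years" is the optimistic end of the range.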

Decades ago, Bill Gates articulated his dream of a computer in every home. The new dream is a 3D printer in every home, and with exponentially declining costs and increasing capability, we may be no more than a decade or two from that goal.

 

“The rate at which the technology is getting faster is itself getting faster.” ~ Peter Diamandis (CEO)

 

Maybe one day you'll break a mug and gasp; it was your favorite mug. There are no stores left selling such antiquated mugs because you're living in the future! Who knew? So you jump on your computer, open AutoCAD (or some other consumer-friendly design program), and design the same mug again, perhaps adding your signature this time, or a picture of your girlfriend. Perhaps you made a digital backup, or took some photos that can now be converted into a digital model, saving you the work of designing it again. With that finished, you send it to your printer, and off it goes, layering, resining, and laser'ing your new mug, layer by incremental layer. Voila! A few minutes later, you're making yourself a new cup of coffee. Imagine the possibilities: toys, tables, chairs (assembled piece by piece), plates, cutlery, bikes, cars, or anything else in your home, or that you can dream of. Recently, a pair of students printed a plane part by part, assembled it themselves, and flew it at a hundred miles per hour (it was unmanned), at a cost of $2,000. Just five years ago, a plane of similar size and capability would have cost $250,000 to build. Imagine what we will be able to create five years from now, when it is another order of magnitude cheaper to print. This technology is taking a hammer to the rich-poor divide, though it will not completely obliterate it. (Something else will, and I'll get to it in a few paragraphs.)

Now, some might think we will be utterly dependent on the companies that make these nifty, life-giving contraptions, much as we are on the energy conglomerates now, but technology sometimes has a funny way of being made of pure awesomeness. When your printer nears the end of its life, you'll be able to print yourself a new one. Today's 3D printers can print seventy percent of the parts needed to build a copy of themselves; five to ten years from now, they will print one hundred percent. It will be next to impossible to monopolize this technology, and even if safeguards were built in, the hacker mentality would spring up to circumvent them. You will more than likely remain reliant on someone for the printer cartridge, though the feedstock should be easy enough to make that a distributed market forms around it, with no one entity holding a monopoly.

Economics will be thrown out the door so violently, it will be the Italian Renaissance all over again, with far-reaching consequences: negative in the short term for working people, positive in the long term for everyone. Look at what the printing press did to the Dark Ages. Gunpowder to knights. Cars to horse carts. Planes to boat travel. The cellphone to the landline. The CD burner (and Napster and BitTorrent and consumers and artists) to the music industry. iPads to netbooks. I leave you with the homework of imagining what will happen to every industry once the 3D printer is mainstream.

iPrint, therefore I am?

The most groundbreaking example of this technology is the life's work of the Italian Enrico Dini: he can print a house! Only a small one for now, as the technology is still in its infancy, but again, this technology increases in capability exponentially, so we won't have to wait long. Imagine having the home of your dreams built exactly the way you want, to exacting specifications, with high-quality materials, no human labor, and no supply chain (save the cartridge). What previously required a dozen men working tirelessly for months could be done by one man in one day! No more living with your in-laws while you wait for your dream home to be completed. And within the three-dimensional reach of the printer, you will not be restricted to the boxy walls and triangular roofs we've grown accustomed to; all manner of shapes, contours, and home types will be possible. Want an upside-down fishbowl home? No problem. Wavy home? Easy. Roman pillars? Call me when you're ready to start using your imagination. Numerous prototypes of 3D-printed buildings exist among companies and inventors around the world; what remains is bringing the technique to the mass market, and I imagine the developing world will embrace it first, just as it did mobile phones, completely skipping the antiquated, resource-intensive landline. One notable researcher, Behrokh Khoshnevis, Professor of Systems Engineering at the University of Southern California, calls his version Contour Crafting. (I highly recommend his TED talk on the subject; Google 'contour crafting TED'. Suffice it to say, plumbers, electricians, and construction workers are going to have a tough time of it.)

3D printing, additive manufacturing, contour crafting, or whatever we want to call it will snatch from the future and bring into the present an economy with very little waste, unimaginable possibilities, huge economic and energy savings, and, most importantly, very little lag between creativity and creation (see the quote below). This will let the ingenuity of humankind spring forth and create a beautiful world unbound by the rules and bylaws of monopolistic practices, which arose from the consolidation of knowledge, influence, and power into the hands of a few, and the subsequent protection of those monopolies by government fiat. Human creativity, in short, is becoming unbounded, and technology is the great equalizer that makes it so!

As the futurist Jason Silva ruminates in his short-form video, Imagination, “If you were able to look at human progress, as if through a timelapse of the last hundred years, you would see that literally thoughts spill over into the world in the form of technology. We engage in feedback loops with that technology, which then extends our ability to instantiate new realities.” 

 

Nanotechnology

Nanotechnology is considered the technological Holy Grail. If nanotechnology fulfills its ideal, every material problem we've ever had or ever will have will disappear, or simply never arise. Nanotechnology, in its simplest form, is building at the atomic scale, usually between 1 and 100 nanometers (nm); to put that in perspective, the DNA double helix is approximately 2nm wide. It is, in essence, building things a few atoms at a time from the bottom up, with zero waste.

Some examples: carbon nanotubes assembled in this fashion into solid, metal-like objects are a hundred times stronger than steel, yet six times lighter. Someday, cars and airplanes will be made from them, increasing fuel efficiency and passenger safety. Some scientists want to use this miraculous substance to build a space elevator reaching 22,000 miles into space; the cost of putting objects into orbit would drop from thousands of dollars per pound to a few tens of dollars, beginning a third space renaissance (Apollo and SpaceX being the first two). And I'll stop using 'renaissance' now.

In medicine, current research points toward nanobots programmed to attack only cancerous cells and viruses, carrying the required medicine directly to the point of contact and affecting only the targeted unhealthy tissue while leaving nearby healthy tissue untouched. No more balding chemotherapy patients! The bandana industry is going to suffer; rally the goldfi…uh, politicians to protect their jobs! And as I alluded to in Fear of Fission, nanotechnology may one day let us get into the nitty-gritty of radioactive waste, isolating or neutralizing it into an inert, harmless substance.

Nanotech surgery is on the horizon: infinitely more precise, able to diagnose and correct internal disease or trauma from the inside out, free of slips of the surgeon's hand, potential infections, and surgical cuts. (And if you recall from Future of Food, antibiotic-resistant super-bacteria are evolving that could make surgery all but impossible, potentially within the next decade.) That is, individual intelligent nanobots will travel to the trauma, assess the damage, and repair only the affected tissue, skipping over healthy cells. We may enter an age where life expectancy takes another huge leap, much as it did in the twentieth century, when the worldwide average went from forty years to kissing eighty, and in some parts of the world beyond. It helps to note that in twenty-five years, computers (nanobots, as we may call them then) are projected to be a hundred thousand times smaller than today's iPhones and Android smartphones, and a billion times faster; that is, they will be the size of blood cells.

We may even reach a point where a person never dies of old age and is kept in optimal health by an array of nanobots floating throughout his or her body, attaching to cells and repairing them daily. We could stay twenty-five forever! Consider this quote by the Foresight Institute:

 

“Nanobots work like tiny surgeons as they reach into a cell, sense damaged parts; repair them by reformatting new atoms, and leave. By repairing and rearranging cells and surrounding structures, nanobots can restore every tissue and bone in the body to perfect health – including replacing aging skin with new, resilient skin, restoring youthful looks and good health.”

 

That's a future the Foresight Institute thinks is possible by 2020, eight short years away; a more realistic timeline, from inventor and futurist Ray Kurzweil, is the late 2020s. I'm already counting down the days, because as a non-theist heathen there's no heaven waiting for me, just a boring eternal darkness where I can't even get bored. How boring! Now, not accidentally dying in the next eight to eighteen years is the task I have given myself…

Don't make the mistake of thinking this technology is only for the rich. The concepts of poor and rich exist only in environments of scarcity, as do the concepts of trade and price. The rich will surely have first access to miracles such as nanotechnology, as they will be the investors (so thank you, rich people!), but the core idea of nanotechnology is that each nano-computer, or nanobot, can turn other matter into another nano-computer. It defies the very laws of scarcity and economics we live under today.

One nanobot becomes two, two become four, four become eight, eight become sixteen, sixteen transmute into thirty-two, and forty-four doublings later, thirty-two has become 562,949,953,421,312 nanobots (more than 560 trillion). Try assigning a price to that!
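For the record, that chain of doublings can be carried out exactly in a couple of lines; five doublings take one nanobot to thirty-two, and forty-four more take thirty-two to 2^49, roughly 563 trillion:

```python
# Exponential self-replication: each nanobot builds one copy per step,
# so the population doubles every step.
count = 1
for _ in range(5 + 44):  # 49 doublings in total
    count *= 2

print(f"{count:,}")  # 562,949,953,421,312
```

A dozen more doublings and the count passes the number of cells in the human body, which is the whole reason the economics of scarcity stop applying.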

Now, there are numerous dangers in unrestrained nanobot replication, known collectively as the Gray Goo Scenario, in which the biomass of the Earth is turned into dead matter. The envisioned controls are a bit beyond the scope of this book (and my limited expertise), but such control systems would likely involve artificial intelligence and centralized replication servers that keep things in check, granting or denying replication requests according to the environment and intended use. Perhaps they would use quantum cryptography: keys exchanged via quantum states, which cannot be intercepted undetected, because an eavesdropper's measurement irreversibly disturbs the quantum state and so reveals the intrusion. That security scheme is just a guess on my part; there will undoubtedly be many layers of increasingly difficult-to-crack security to protect us from the harmful effects of nanotechnology and ensure only the positive ones are unleashed, to the benefit of all. For a more in-depth primer exploring the pros and cons of nanotechnology in far greater detail, Ray Kurzweil's The Singularity Is Near is an excellent read (as well as on biotechnology, additive manufacturing, increasing computational capacity, turning the Universe into God, and more).

The potential of the human race is being realized, and it will usher in a future brighter than any of us can imagine. There will be pains along the way, especially economic ones (though thanks to technology, worldwide per-capita income has tripled in the last century), and the usual social unrest that accompanies such pain. But technology, as it has in the past, is the only thing that can deliver us from the woes of the twentieth century and all the centuries before it, and the only thing that can provide a beautiful life to all seven billion people on this little blue rock. It must be embraced with open arms, from a platform of knowledge rather than ignorance, as is usually not the case when we enter turbulent, exciting times. It is, and perhaps always will be, easier to invent new technologies than to reprogram the irrational hearts and rationalizing minds of billions of people.

 

“We didn’t stay in the caves, we didn’t stay on the planet, and we won’t stay with the limitations of our biology.” ~ Ray Kurzweil (Inventor)


Note: the book is fully sourced, but because of the writing program I use, the links don’t transfer over to WordPress, and I can’t be bothered inserting them in one at a time. The final book will have all the relevant sources in the proper locations.