(copyright 2018 by Russ Hodge)
March 10, 2018
To the editor.
Dear Sir(s) and/or Madam(s),
My compliments on the zesty new editorial direction your journal is pursuing at a time when the print media are generally regarded as a horse so dead you could never beat it back to life. Not that a scientist would ever beat a horse, of course, unless there was some therapeutic purpose to it (e.g., cardiac massage). Which in this case would be senseless, because the horse is purely metaphorical – not that my colleagues hold a metaphorical horse in any less regard than a real one. But I digress.
It is bold indeed of you to publish in serial form the epistolary war that has been raging for so many months between Prof. Dr. Marius Linksunteraermer and Dr. Dr. Vincenzo Gambini. I am so caught up in it that I find myself waiting for each edition of your journal in a state of heightened anticipation bordering on erotic arousal (which is really saying something, at my age). There’s nothing better than watching two scientists go at each other tooth and nail, rapier and switchblade, particularly when they are arguing about something of no significance whatsoever.
To summarize: about a year ago you published a paper by Gambini’s lab. Linksunteraermer had no issue with the work or its results, with the exception of the legend to one figure. As is customary, you gave Gambini the opportunity to respond, and the two letters appeared side by side. Naturally Linksunteraermer felt compelled to respond to the response, and naturally Gambini provided a rebuttal. Now, of course, the Djinn had been let out of the bottle and there was no putting him back.
Here are the figure and legend as they originally appeared (vol. 139, p. 1206):
Fig 6a. We collected ca. 1 billion pieces of data as summarized above (for the complete list see the Supplemental Data). The results fell into four distinct groups which could be cleanly separated into the quadrants 1, 2, 3 and 4 as indicated. Since printing 1 billion dots would have taken about 6 years, we simplified the diagram by selecting the most representative dot in each cluster (i.e., the one closest to the middle), then rounded it off to the nearest integer for plotting on the chart. The result confirms our hypothesis that four distinct mechanisms are at work in the system. (Note: What appears to be a single point at position 0,0 is actually four median points, one in each quadrant: after rounding off, they overlap.)
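Whatever its scientific merits, the legend’s procedure is at least mechanical. Here is a minimal sketch of it (this is not Gambini’s actual code, which was never published, and the four toy clusters are invented): pick the point in each cluster closest to the cluster’s centroid, then round its coordinates for plotting.

```python
def representative_point(cluster):
    """Return the cluster's most representative point (closest to the
    centroid), rounded to the nearest integers for plotting."""
    cx = sum(x for x, _ in cluster) / len(cluster)
    cy = sum(y for _, y in cluster) / len(cluster)
    best = min(cluster, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    return (round(best[0]), round(best[1]))

# Four invented toy clusters, one per quadrant, all hugging the origin:
clusters = [
    [(0.4, 0.3), (0.2, 0.5)],      # quadrant 1
    [(-0.4, 0.3), (-0.2, 0.5)],    # quadrant 2
    [(-0.4, -0.3), (-0.2, -0.5)],  # quadrant 3
    [(0.4, -0.3), (0.2, -0.5)],    # quadrant 4
]
print([representative_point(c) for c in clusters])
# all four representative points round to (0, 0), reproducing the
# single visible dot the legend apologizes for
```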
What followed is best captured by citing a few passages from the exchange. Linksunteraermer’s first letter expressed polite skepticism that 1 billion data points could be so cleanly sorted into four distinct groups. “There must have been outliers,” he protested.
Gambini provided the following “clarification”: “We could cleanly distinguish the datapoints appearing in quadrant 1 from those in quadrants 2, 3, and 4 because the data point with the highest value for y in quadrant 1 lay higher than any of the y values for the data points in quadrants 3 and 4, although not necessarily higher than the highest y in quadrant 2, nor any of the x values, of course, and the lowest point in quadrant 1 also lay higher than the highest points in quadrants 3 and 4, but not that of quadrant 2; and the lowest y value in 1 also lay higher than the lowest points in quadrants 3 and 4, although it was not necessarily lower than the lowest point in quadrant 2. Taken together, this implies that both the highest and lowest values for y in quadrant 1 were higher than the y values for either the highest or lowest points in quadrants 3 and 4, although the highest value for y in quadrant 1 may have been lower than the highest in quadrant 2 and the lowest higher than its lowest. That accounts for the y value. The values for x behaved exactly the same way, which spares me the task of having to repeat all of that – provided you make the following alterations: exchange the terms ‘highest’ and ‘lowest’ with ‘farthest right’ and ‘farthest left,’ respectively, and wherever the terms ‘quadrant 2’ ‘quadrant 3’ or ‘quadrant 4’ appear in the description above, replace the 2 with a 4, and replace the 3 with 2. Be careful about the order in which you do this so that you don’t change the 3 to a 2 and the 4 to a 3 and then change the resulting 2 to a 4 and 3 to a 2; only one transformation may be applied per quadrant.”
While mere mortals might have been daunted by this answer, Linksunteraermer’s response came a scant week later: “You’ve completely missed the point. My question is, how can you be sure that none of the points plotted in quadrant 1 actually belongs to one of the other experimental groups, which would mean you would have to cluster it with datapoints in quadrants 2, 3, or 4 rather than grouping it with the other points lying within quadrant 1?” To which Gambini replied, “Because by definition a point in quadrant 1 lies to the right of the vertical axis and above the horizontal axis (i.e., both x and y have positive values), and any point failing to meet both criteria must lie in one of the other quadrants, depending on whether it is positive or negative, also by definition,” to which Linksunteraermer retorted, “But that doesn’t address the question,” earning the following rather snarky reply from Gambini: “Perhaps you failed to understand the answer,” and from there the discussion deteriorated into an exchange of personal insults, including some rather colorful and highly creative references to anatomical features and their functions in reproductive biology, in some cases across a species barrier, which you faithfully printed. I will refrain from going further to avoid spoilers, but I assure anyone who cares to read the letters that they will find entertainment of the highest order.
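For readers keeping score at home, Gambini’s definition of the quadrants is at least unambiguous. A minimal sketch (the function name is my own invention, as is the handling of points that fall exactly on an axis, about which the combatants are silent):

```python
def quadrant(x, y):
    """Classify a point by the standard quadrant convention Gambini cites."""
    if x > 0 and y > 0:
        return 1
    if x < 0 and y > 0:
        return 2
    if x < 0 and y < 0:
        return 3
    if x > 0 and y < 0:
        return 4
    return None  # on an axis: neither letter-writer addresses this case

print(quadrant(2.0, 5.0))    # quadrant 1
print(quadrant(-1.0, -3.0))  # quadrant 3
```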
I have not written this letter as a means of getting drawn into what will surely end in at least one homicide. Wagers are being made throughout the research community on who will survive. My money’s on Gambini; the intellect responsible for that figure legend and the subsequent explanation will not go down quietly.
No, all of this reminded me that it was time to finish a little project I started a few years ago demonstrating that any attempt to plot data onto an x-y axis, the type that Gambini used, is doomed because of a fundamental flaw of reasoning that renders them all meaningless.
I enclose a copy along with this letter, which I humbly submit to your journal for consideration.
With warmest regards,
Wilford Terris, Prof. emeritus (at large)
An inherent flaw in Cartesian coordinate systems is associated with death by elevator and accurately predicts the end of civilization as we know it
by Wilford Terris, Prof. emeritus (at large)
Introduction: a brief history
Scientists and normal people, too, are familiar with the practice of plotting information on a system with an x-y axis. The formal designation is the Cartesian coordinate system (or CCS, fig. 1). It is named for half of the French mathematician René (Des)cartes (fig. 2), who invented it but later disowned it, at least partly – apparently the latter half, to judge by its name. He did not, as some claim, cry out on his deathbed that it had been inspired by the Devil; his disappointment was purely financial. The coordinate system became the Microsoft of the 17th century and would have made Descartes as rich as Bill Gates (imagine Bill in a powdered wig and pointed purple shoes with large buckles) – if only Descartes had remembered to patent it. As a result of his failure to do so, he never saw a penny (or sou, as the French call it) while others made a killing. By the time he came up with his second major invention, the “mind-brain dichotomy,” he’d learned his lesson and patented it right away. Unfortunately it turned out to have no practical value whatsoever, except as a sort of occupational therapy for philosophers. Descartes became pathologically bitter and died in poverty while teaching in Sweden on a visiting professorship contract, in the company of a female robot that was either a life-sized replica of his deceased daughter or a mechanical sex toy.
Fig. 1. Cartesian coordinate system
What made the CCS so popular was that a portable version (called the iCCS) could do just about everything that smartphones are used for today. You could do addition and multiplication on it, and if you were really clever, subtraction and division. And what is GoogleMaps, really, other than an x-y axis with a few details filled in? You could use it as a chessboard, and even play the game “Pong” on it by calling out equations such as “y = (x-1)!” to describe the trajectory of the ball. The biggest seller, of course, was Battleships, which hadn’t made much sense at all before the arrival of a coordinate system. When a few customers complained that you couldn’t watch cat videos on the iCCS, the King had them beheaded, to the delight of all.
Fig. 2 René Descartes (1596–1650), inventor of the Cartesian Coordinate System
First scientific applications
With the arrival of the Cartesian coordinate system, people began counting things they’d never paid attention to. A cause could now be connected to its effects, supported by actual data. On the x axis you could plot the number of beers a person drank, for example, and on the y axis the number of times they fell down. A situation in which both numbers always rose together or fell together, like the drinking-and-falling effect, was called a correlation (derived from the French word for incest) and was generally assumed to indicate a causal relationship. This led to landmark publications in journals such as Nature with titles such as, “Drinking alcohol makes people drunk.”
Another revolutionary scientific discovery to emerge from the CCS was the concept of being fat. Some people had always been thicker than others, but it didn’t seem to have any practical consequences, so no one particularly noticed. There had been a theory that if a rotund person jumped from a high place, he would bounce upon hitting the ground. Galileo proved this wasn’t true in the first human clinical trial on falling, carried out at the Leaning Tower of Pisa. The two colleagues he used as the test subject (250 kg) and the control (50 kg) not only hit the ground at the same time but produced nearly identical blots of a gelatinous composition, considered the origin of the scientific practices of obtaining gels and blots. He concluded that a person’s mass had no practical consequences, in terms of its rate of falling or the degree of flattening upon impact, so science agencies struck research on body mass from their funding agendas, and the field wasted away for more than a century.
Then a French physician named Jacques Derrière got tired of his wife asking – for the 1,000th or 1,000,000th time – “Does my butt look wide in this dress?” Before offering any conclusions he thought it might be wise to gather some scientific data. He began measuring the height and weight of everyone he met, plotting one against the other on an x-y axis, leading to the first Body Mass Index, or BMI. The results were interesting. Most people’s measurements fell within a particular zone of the chart. A few points lay far outside this zone, generally people of normal height but with the weight of a grand piano, or a small whale. Since these were frequently the same people who drank lots of alcohol and fell over outside in the street, Derrière called them outliers.
Derrière packed one of them in his wagon and carted him off to the hospital for dissection. After a few pokes with a needle to ensure the putative Scientific Breakthrough was dead, he made his first incision in the gut, looking for balloons or some other mechanism that would make a fellow swell up that way. What he found was a blubber-like tissue that he described, in his landmark paper, this way: “Upon close examination, the patient was determined to be Full of Adipose Tissue, or FAT.”
He extended his plot of BMIs to other species, and the next time his wife thrust her impressive posterior toward him and demanded an answer, he was ready with precise data. “Your butt has the BMI of a small elephant,” he told her, upon which she immediately rewarded him by elevating him to the rank of Martyr to the Cause of Science. His data, thankfully, survived.
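The modern BMI formula – weight in kilograms divided by the square of height in metres – post-dates Derrière, of course, but it captures what he was plotting. A quick sketch, using Galileo’s two falling subjects as invented data (the 1.75 m height is assumed for both):

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight in kilograms divided by height in metres, squared."""
    return weight_kg / height_m ** 2

# Galileo's test subject (250 kg) and control (50 kg), both assumed 1.75 m tall:
print(round(bmi(250, 1.75), 1))  # 81.6 -> a decided outlier
print(round(bmi(50, 1.75), 1))   # 16.3
```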
A tool for engineering
The BMI charts from Derrière rapidly became valuable references for experts in disciplines beyond butts, such as the engineers who designed the second elevator ever to be constructed. Elevator design was not yet a science, because there was only one dataset, from the first elevator, and to become a proper field of science it is generally considered necessary to have at least two. The results of the first elevator experiment are recorded in the Cartesian coordinate chart below (Fig. 3).
Fig. 3. First human trial of an elevator. The x axis represents an individual’s lifespan (measured in seconds from the moment of boarding the elevator); the y axis records the weight of each person on board. Based on this chart, engineers concluded that there was no association between an individual’s BMI and risk of death by elevator.
Although later no definitive cause could be established for the premature termination of that experiment, the frayed ends of the rope were suggestive of some sort of separation event. While no one disputes that the effect on the 50 passengers who had eagerly volunteered for the maiden voyage of an elevator was rather negative, on the whole the experiment was considered a success: their cabin had ascended nearly three-quarters of the height of the Eiffel Tower before abruptly reversing direction.
The Assistant Head Engineer, whose social status had not allowed him on board, suspected that the thickness of the rope might have been a factor. He could have benefited from the calculations of the Head Engineer, who had been prominent enough to be given a spot on the historic flight, but his notes were an unreadable scrawl and the Head Engineer himself was no longer available for consultation. Had he factored an accurate estimate of the weight of the passengers into his calculations? Had he realized they might be carrying things in their pockets – loose change, car keys – or have body piercings that would add to the weight? For his new calculations, the Assistant Head Engineer could draw on Derrière’s BMI tables to estimate the total weight that a rope would have to hold. At the last moment he remembered to add the weight of the cabin, which was about a ton. This gave a figure that could be plugged into an equation to yield the gauge of the rope that would be needed:
more weight = thicker rope
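In modern terms the Assistant Head Engineer’s estimate is simple bookkeeping. A toy sketch; the per-passenger weight, the one-ton cabin, and the safety factor of 5 are all invented for illustration, not engineering standards:

```python
def required_load_kg(passenger_weights_kg, cabin_kg=1000, safety_factor=5.0):
    """Total load the rope must bear: passengers plus cabin, times a
    safety margin (the factor of 5 is an invented illustration)."""
    return (sum(passenger_weights_kg) + cabin_kg) * safety_factor

# 50 passengers estimated at 80 kg each from the BMI tables:
print(required_load_kg([80] * 50))  # 25000.0 kg
```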
The purpose of this piece is not to cause panic; if that had been my intent I would have started with alarming statistics about how many millions of people die in elevator accidents every year, then claimed that there is currently no way to diagnose an individual’s risk of dying in an elevator accident based on molecular markers. And the number of deaths just keeps accumulating. (Indeed, how could it get smaller? You can’t subtract any from the bottom of the pile.) There are no effective treatments, because the first symptoms (a sensation of falling, usually accompanied by loud vocalizations) appear just moments before death. By that time it’s too late to unpack the parachutes.
Later experiments showed that parachutes don’t work in a falling elevator anyway, due to a localized disturbance in the laws of physics that Newton called the “temporary exception for a parachute in an elevator.” So there is an urgent need for further research to design novel rational therapies for a condition that causes morbidity and mortality not only for the victims, but also for anyone who happens to be standing at the bottom of an elevator shaft at the wrong moment.
Today’s engineers draw on the Body Mass Index and a plethora of other Cartesian coordinate systems in the design of practical equipment such as elevators. Modern elevator science has progressed far beyond the state of affairs which reigned at the second clinical trial of an elevator, for which it was surprisingly hard to find volunteers. In the end they substituted cows, replacing the Body Mass Index with a Bovine Mass Index, in one of the first historical examples of replacing humans with animal subjects. That experiment was a success; all of the cows reached the top alive. What happened when they stampeded out of the cabin and found themselves on the rather small viewing platform at the top of the Eiffel Tower is another matter, but is irrelevant to the study’s results.
As I mentioned, I do not wish to cause panic. But let us imagine, purely hypothetically, that someone were to discover some inherent, fundamental flaw in the Cartesian coordinate system. Wouldn’t this call into question the structural integrity of every real-world device designed using a CCS? Every elevator, bridge, floor, ceiling, zipper, button, gas mileage estimate, the location of every street and town? Even the Earth itself wouldn’t be in the position we had assumed it to be. What about the social institutions, government, and the premises on which society is founded? Yes, CCSs have been used in creating these institutions as well. Think of the consequences. What would happen if the weaknesses in every CCS caused them all to fail at the same time, perhaps through the activity of a virus that has been lying dormant inside a glacier and is suddenly revived by global warming? This would be likely to happen at a time we can’t predict, because all of our estimates depend on the very type of CCS that is hastening toward collapse.
Problems with the Cartesian coordinate system have been known for many years, but papers demonstrating them rarely reach the pages of journals. The central dogma of the CCS is the foundation of the entire system of impact points by which editors and the other Plutocrats of science justify their power; any challenge to the model calls their authority into question. But evidence from outside the mainstream has now accumulated to the point that it is finally spewing through the cracks, while the status quo no longer has enough fingers to plug all the leaks. There is a major paradigm shift in the making. It is usually heralded by portents: a plague of locusts, a weird person who might be a zombie, inexplicable changes in your partner’s mood, your cell phone battery draining faster than it should… If you notice these or similar signs, take appropriate measures. You can never go wrong laying in a nice assortment of canned foods, but firearms are not advised. A paradigm shift cannot be countered by conventional weapons.
(coming soon, stay tuned!)
This is the first of several pieces in response to questions I have received about my recent lengthy article (too lengthy!) on “Ghosts, models and meaning: rethinking the role of communication in science.” Read the full article here.
Can you give me a succinct definition of the “ghosts” you’re talking about?
There are a lot of contexts in which science communication somehow fails because an audience doesn’t get the point or understand a message the way it was intended. The naïve view of this is that scientists just know a lot more about a specialized topic than people from other fields or the public. That happens, but it’s rarely the biggest issue in communication – and it doesn’t explain why people have problems writing for experts in their own field.
When I began teaching scientists to write I came across a lot of content-related breakdowns that were hard to understand. This got so frustrating that I finally decided I had to systematically analyze the problems I was seeing. That took about four years, and “ghosts” emerged as one of the most frequent and important issues.
Ghosts originate from many things: concepts, frameworks, logical sequences, other patterns of linking ideas, theories, images and so on. What unifies them is that the author has something in mind as he composes a message, and it is essential to understanding what he means – but he never directly articulates it in the message itself. He may not even be aware of it. Since it’s nowhere to be found in the message, it’s invisible; if the reader doesn’t sense its presence, or can’t easily recover it, it disrupts his understanding of what the author means.
This is true of all kinds of communication, but the natural sciences have some special ways of assigning meaning to things that really need to be taken into account when you’re planning a message or trying to interpret one. If you don’t, you’re setting yourself up for misunderstandings.
You mention models again and again – why are they so central to these misunderstandings?
Among the most significant and disruptive ghosts in science are various models that are used in formulating a question or hypothesis and interpreting the results. Most studies engage many types and levels of models. Scientists obviously recognize this; as Theodosius Dobzhansky said, “Nothing in biology makes sense except in the light of evolution.” But there is a big difference between acknowledging this and demonstrating how something like evolutionary theory reaches into very basic practices, such as how scientists name molecules. And there’s a big invisible ghost behind Dobzhansky’s statement – something he doesn’t explicitly state but is essential to understanding what he means. Evolution itself is based on even more fundamental principles of science, so if you’re talking about the theory, you’re also talking about them. In fact, most “debates” over evolution are actually arguments about even bigger things, and if you don’t confront that level of the disagreement, it doesn’t even really make sense to discuss whether species change or go extinct or the other topics that these discussions always get mired down in.
What are those specific ghosts?
I think there are two, and they are so basic that they distinguish science from other ways of thinking about things and assigning them meaning. I call the first one the principle of local interactions: if you claim that something directly causes another thing, you either prove or assume that the cause and effect come in contact with each other in time and space, or are linked by steps such as a transfer of energy that follow this rule. So to make a scientific claim that a child inherits traits from its parents, you have to find some direct mechanism linking them, such as the DNA in their cells. It is directly passed to children from the reproductive cells of their parents, and it’s the blueprint that creates bodies through its transcription into RNAs and their translation into proteins.
The second principle applies this type of causality to entities as complex as organisms or entire ecospheres and declares that the state of a system arises from its previous state through a rule-governed process. And it will generate future states through the same rules. You may not know what they are, but you assume they are there, and a lot of scientific work is devoted to creating models that will expose them. If you follow this principle you can observe what is going on in a system right now and project it far into the past and deduce its previous states. This is the source of the Big Bang theory in astrophysics; it’s the basis of geology, and when Darwin applied it to life he got evolution. Extending the principle into the future is the basis of the experimental method used to determine whether your model of a system is accurate enough to work with – if something in an experiment violates predictions made by the model, you have to revise it.
Anything that violates the principle of local interactions would be considered non-scientific. That’s the case for extrasensory perception – until you can demonstrate that some energy passes from one person’s mind into another’s, you can’t make a scientific claim for its existence, so you have to look closely into whatever model of causality led you to claim it might exist. And the second principle implies that there are no discontinuities – you can’t create something from nothing. Miracles and the fundamentalist account of creation violate both principles.
If you can’t agree on these two things, it makes very little sense to discuss details of evolution that derive from them, because differences in your very basic assumptions will make it impossible to reach any sensible common ground – or even define some of the terms you’re talking about. So these principles are ghosts in “debates” on this topic, and they are the things you need to debate, provided you can do so fairly, with intellectual honesty and integrity.
You came up with this concept of “ghosts” while working on texts by students and other scientists. Why are they a particular problem for students?
Active researchers are deeply engaged with their models; most projects take place in a fairly exact dialog with models you are either trying to elaborate on by adding details, or extend to new systems, or refute through new evidence. This makes models very dynamic, and there’s no single reference, on the Internet or anywhere else, where you can go and find them. In biology virtually every topic gets reviewed every year or two, in an expert’s attempt to summarize the most recent findings and keep people in a field more or less on the same page. That’s the group a lot of papers and talks are addressed to – at least, most scientists think of their audience that way – and they assume the readers will have more or less the same concepts, models and frameworks in mind. Anything that is widely shared, people often fail to say – they think they don’t need to. And it’s impossible to lay out all the assumptions and frameworks that underlie a paper within it – you can’t define every single term, for example. So these become ghosts that aren’t explicitly mentioned but lie behind the meaning of every paper. The two really huge basic principles I mentioned above are rarely, if ever, described in papers.
And even the details of the models more directly addressed by a piece of work – the physical structure of the components of signaling pathways, or all the events within a developmental process – aren’t mentioned very often. Those models are embedded in higher-level models, and the relationships in this hierarchy are not only hard to see – there’s no single way of explaining them. Scientists sometimes work these things out fairly intuitively as they extend the meaning of a specific set of results to other situations and higher levels of organization.
Now imagine a science student who is absorbing tons of information from papers like these. As he reads he’s grappling with understanding a lot of new material, but he’s also actively building a cognitive structure in his head – I call it the “inner laboratory,” or “cognitive laboratory.” It consists of a huge architecture in which concepts are linked together in a certain structure. The degree to which he understands a new piece of science depends on how that structure is put together, and where he plugs in new information. If the text he’s reading doesn’t explicitly tell him how to do this, there will be a lot of misinterpretations.
How can his professor or the head of his lab tell whether a scientist under his supervision is assembling this architecture in a reasonable way? You catch glimpses of part of it in the way someone designs an experiment, but I think the only method that gives you a very thorough view of it is to have the young scientist write. That process forces him to make the way he links ideas explicit and put them down in a way you can analyse each step. In writing – or other forms of representation, such as drawing images or making concept maps – you articulate a train of thought that someone else can follow, providing a means of interrogating each step. Most texts are pretty revealing about that architecture; if you read them closely you can see gaps, wrong turns, logical errors, and all kinds of links between ideas that a reader can examine very carefully.
The problem is that in most education systems in continental Europe, where most of the scientists I deal with were educated, writing is not part of the curriculum. Whatever training they have is done in all sorts of ways, and the teaching is usually not content-based. Instructors use all kinds of exercises on general topics, but that learning doesn’t transfer well to real practice. Why not? Because when you write about a general theme, your knowledge is usually arranged very similarly to the teacher’s and any general audience’s. In your specialized field, on the other hand, your knowledge is likely to be very differently arranged, and that’s where the ghosts start to wreak real havoc on communication.
So ghosts aren’t just things that scientists leave out of texts – they’re also phenomena that arise from the reader or audience…?
Absolutely – they arise from differences in the way a speaker and listener or a writer and reader have their knowledge organized. That can happen in any kind of communication, but in science it’s actually possible to pin ghosts down fairly precisely. In political discussions or other types of debates there aren’t really formal rules about the types of arguments that are allowed… But if you know how meaning in science is established, you can point to a specific connection in a text or image and say, “To understand what the scientist means, you have to know this or this other thing.” Again, since neither of you can directly see what’s in the other’s head, a reader may not guess that some of the meaning comes from very high levels of assumptions, or a way of organizing information that you’re not being told. And some have been digested so thoroughly by scientists that they’re no longer really aware that they are there.
Some of the most interesting ghosts appear when you try to use someone’s description of a structure or process to draw a scheme or diagram. I recently had to draw an image of how a few molecules bind to DNA because we needed an illustration for a paper. I thought I had it clear in my mind, but I ended up drawing it five times – each version incorporating some new piece of information the scientist told me – before I got it the way she wanted it. You learn an incredible amount that way.
A scientific text is often based on an image of a component or process that a scientist has in his mind. He’s trying to get a point across, and to understand what he means you have to see it the way he sees it – but if he leaves anything out, it’s easy to completely miss the logic. It’s like trying to follow someone’s directions… That works best if the person who’s giving the instructions can “see the route” the way it will appear to you, maybe driving it one time to look for the least ambiguous landmarks, or taking public transportation and watching exactly what signs are the most visible. And thinking it through with the idea, “Now where could this go wrong?”
Another thing you refer to is concept maps – you include several examples in the article. How do they fit in?
Concept mapping is a system invented by a great educator named Joe Novak; it gives you a visual method to describe very complex architectures of information. It’s extremely useful in communication, teaching, and analyzing communication problems. One reason it’s so important is that our minds deal with incredibly complex concepts that are linked together in many ways. Think of trying to play a game of chess without a board – that’s incredibly difficult, but a chess set is a fairly simple system compared to most of those that science deals with. There’s really no way to keep whole systems in your head at the same time. Making a map gives you a chance to see the whole and manipulate it in ways that would be impossible just by thinking about it.
But the real genius of this system appears in communication and its most precise form – education – where a teacher ought to understand what he is really trying to communicate, and how it’s likely to be understood by the students or audience. In most cases you’re hoping to do more than just “transmit” a list of single facts; you’re trying to get across a coherent little network of related ideas, linked in specific ways. If you do that successfully, the audience will leave with a pattern they can reproduce later. It might be a story, a sequence of events, or a metaphor – the main thing is, they have seen how the pieces are related to each other.
A great way to do this is to make a map of the story you’re trying to tell, and then make your best guess about how this information is arranged in the heads of your target audience. What can you realistically expect them to know, and what information and links are likely to be new? If you see the pattern you’re trying to communicate very clearly, and make a reasonable guess about how some type of knowledge you can relate it to is arranged in your audience’s head, you know what you have to change to get them to see things the way you’re hoping. In schools they’re teaching kids to make concept maps early on. Then before a lesson about something like the solar system, the teacher has the kids draw a map of what they think about the sun, moon, planets, and so on. After the lesson the kids make a new map – comparing the two tells you what they’ve really learned.
In your article you point out ghosts that come from schemes like sequences of events or tables…
A lot of scientific models consist of sequences of interactions between the components of a system. Those start somewhere and involve steps arranged in a particular order, and it’s important for the reader to have a view of the steps and that order in his mind. You’d be surprised how often scientists describe these processes in some bizarre order that doesn’t go from A to K, but starts at G, goes to H and I, then goes back to G and works backward to F, E, and D… Again, if you are already familiar with the sequence or pathway, this is no problem. But if you aren’t, you’re left trying to assemble the process in some reasonable order yourself. That may be possible through a careful reading of the text, but it takes far more “processing time” than a reader would need if the whole sequence were simply laid out in order in the first place.
Tables are interesting because a lot of experiments are designed with a structure that’s pretty much inherently that of a table. Say you have two experimental systems plus a control, and you apply two procedures to all of them. To make a claim about the results, you have to march through all these cases – basically a table that’s 3×2 or 2×3. Here again, you’d be surprised how many scientists’ descriptions skip over some of the cells of the table, mostly because the results aren’t very informative. Or they tell you, “Procedure A caused a 5-fold increase over Procedure B,” without telling you what happened in the control.
Both of these effects are due to a scientist’s failure to recognize the structure of the information he has in his head and is trying to present – and then a failure to present that structure in the text in a way that’s easy for the reader to rebuild in his own head.
You’ve said that ghosts are one component of a larger model you’re working on that reformulates the relationship between science and communication… What else is there?
A lot of the other points can be captured through an exploration of what I call this “inner” or “cognitive” laboratory of science. The really good scientists I know have a very clear understanding of their own thinking. They know the assumptions that have gone into the models they are using, and are aware of the limitations, where there are gaps and so on. That type of clarity usually translates into good communication, no matter what the audience.
One thing I found during this project that was very surprising was the extent to which writing and communication for all kinds of audiences are interconnected, and how addressing very diverse audiences could clarify thinking in a way that improved a scientist’s research. When you find a scientist struggling with clarity in a text, it usually means one of two things: either the topic is not clear in his head at that moment, or it’s not clear in anybody’s head at this moment in science. That second case is very interesting because it means you can find interesting questions just through a very careful reading of a text, realizing that it’s asking you to build a certain structure of ideas. If you have difficulty, that means something. One of the basic strategies I used in working these things out was that problems are meaningful – they’re trying to tell you something about how good science communication works, or how scientific thinking works… usually both.
Speaking to a general public with really no specialized knowledge of a field can be a truly profound exercise for a scientist. It makes him interrogate his own knowledge in alternative ways. He has to come to a much more basic understanding of the patterns in his inner laboratory and apply different metaphors, trying to map that knowledge onto someone else’s patterns. Well, the cognitive laboratory is already metaphorical, based on concepts rather than real objects, and applying new patterns or metaphors to what’s in there is extremely interesting. It can suggest questions you’ve never thought of before. This means that tools that have been developed by linguists and communicators can be used as tools to crack open scientific models.
I’ve actually done this – used those tools to expose an assumption about evolution that everyone was making but wasn’t usually aware of. The assumption had never been tested, so my friend Miguel Andrade decided to take it on as a project, and put a postdoc on it. The results were really interesting, showing that there were a lot of cases where the assumption didn’t hold – and we got a published, peer-reviewed paper out of it. That was three years ago, and in the meantime I’ve been involved in a number of similar projects that have had a similar outcome. A communicator who pursues questions about meaning and language has a different set of tools to understand how ideas are linked in scientific models. You’re freer to apply slightly different metaphors and patterns to ideas; you may be more rigorous in perceiving assumptions; metaphors and other tropes help you see cases in which people are reasoning by analogy rather than strictly adhering to the system at hand.
So these ideas aren’t just a way to help people plan and communicate better – although they certainly help with those tasks. In fact they are much more fundamental in scientific thinking. Understanding these relationships between communication and science is a pathway to doing better research, through a better understanding of its cognitive side. I’ve noticed recently, for example, a lot of cases where the way people are thinking of complicated processes is drifting away from the language they use to describe them. The language is conservative and it may be hard to adjust. But that will be essential as the models these fields are using move forward and become so complex that our minds – and our language – may not be truly able to capture them.
All cartoons free for use – just cite copyright 2018 by Russ Hodge.