We humans are silly, irrational creatures. Neurologically, we make decisions based on bad or misleading assumptions, impractical expectations, and often raw, unfiltered emotion. Psychologically, we then attempt post-hoc justification of what we've already decided, often allowing ourselves to jump through ridiculous logical hoops to try to seem reasonable. If that fails, we might even abandon any pretense of rationality and simply appeal to unknown or unknowable forces that don't have to obey the rigid laws of reality.

By all rights, we ought to still be living in caves and banging rocks together.

And yet, we're not. We live in skyscrapers and mansions, apartments and houses. We travel around our planet at hundreds of miles an hour, or off of it at thousands. We create languages that lead to novels and poetry, instruments that produce punk rock and symphonies, artworks that inspire great emotion.

Of course, we've also created weapons of mass destruction. We've severely imbalanced if not outright destroyed entire ecologies. We've perfected genocide, popularized prejudice, and fought for thousands of years in the names of so-called benevolent deities.

95% of what we do as a species is irrational to say the least. It's that last 5%, though, that must give us pause. That last 5% gives us science, and reason, and medicine, and technology... all the things we associate with progress.

And while hope is silly, irrational, a product of the first 95%, it is hope that drives us again and again to thinking that maybe, just maybe, that last 5% can, in the end, make up for all the rest.

Which brings us to tonight, and my own personal moment of irrationality: here's hoping that, in the coming year, every one of you finds enough benefit from that last 5% to make the other 95 worthwhile.

Happy New Year.




Tradition is the method by which we perpetuate culture, passing on beliefs, ideas, or customs from one generation to the next. Traditions form the basis of cultural identity, the mortar with which a family, a city, a country builds its personality. They are often unassailable bastions at the core of a people's history.

And that's why they're dangerous.

Traditions are often immune to challenge, merely by virtue of being "traditions": doing something "because that's the way we've always done it", whether it's who carves the turkey, how we set up the Christmas decorations, or how we venerate people in the past. And while culture is often an excuse, "the past" is really the key point here, because tradition looks only to the past and only with longing in its eyes.

The hidden assumption in tradition is this: the way things were done before is the best way and will always be the best way going forward. That assumption is quite often wrong. If a behavior is, in fact, the best method, then it should be able to stand on its own without being labeled "traditional"; if it is not, then no amount of reverence for history should prevent the best choice from being made. "Tradition" implies that a bunch of desert nomads from 2000 years ago could know what's best for people living in hundred-story high rises. It implies that a handful of rich white men from a few hundred years ago knew what was best for computer gamers and gun owners today. It argues that, because something was done in a specific way at some point in the past, that way is now a priori the best way and should never change.

As Toscanini said, "Tradition is just the last bad performance."

There are reasons to value cultural, historical or social traditions in the context of history, such as teaching traditional art forms, languages, or even rites. But those traditions should never be divorced from the time period in which they arose or their own cultural histories: why things were done a certain way, what the justification was, the effects of that tradition on future history, etc. Traditions should be valued as artifacts in the same way we value physical artifacts: as curiosities and points of reference, not as relevant for today's world. We can appreciate the role of ceremonial dance and religion in the political and social framework of pre-modern cultures without feeling the need to revere them, the same way we can appreciate an old flint knife without feeling the need to give up stainless steel or carbonite.

History shouldn't be ignored, but neither should it be placed on a pedestal and worshipped. Good practice and good information can stand on their own without arbitrary enforcement: if something is useful, it is useful whether it is old or new. We study Aristotle, Euclid, Newton and Freud not because of tradition but because of the inherent value of their statements and ideas, even if we've proven them "wrong" (or at least restrictive) in the intervening years: they are stepping stones along the path to modern logic, mathematics, physics and psychology. Just as we can appreciate Bach and Mozart without limiting ourselves to Baroque music and abandoning the Romantic period, we can appreciate the trappings of history without feeling the need to continue them and, instead, improve or even abandon them as need arises.

As we approach a holiday season that is often draped in traditions, don't be afraid to abandon them. If you find yourself sad or depressed, it may be because you're trying to cling to an outmoded idea of "should" that no longer fits - a tradition that is no longer useful. Embrace whatever celebrations or routines you find useful, even if they aren't "traditional": go out instead of eating in, invite close friends to dinner instead of family, skip the presents and donate to charity or spend time at a soup kitchen - whatever you feel is a better fit for you and your life today, not what you've been taught "should" be done.

Tradition looks to the past, but, while we must remember what has come before so as not to repeat the same mistakes, the past is not a model by which we can live. Progress exists only in the future, and with it comes the most powerful force we know - the thing that can forever destroy the historical, repetitive, traditional grievances of life:



Seeing isn't believing

"Observation" isn't a direct action.

We see things because light bounces off of them and reflects into our eyes (biological processes aside). We often think of this as "seeing the object", but the key here is that light itself is a thing, and it's really the light you're seeing: you have no direct interaction with the object. You only interact with it through the intermediation of light.

All observation takes place like this: whether it's photons (light), electrons, or anything else, the only way to "observe" something remotely is to bounce things off of it. Most of the time, this doesn't matter much: photons bouncing off of "normal"-sized objects don't do much to the object. When we're talking about atomic or subatomic levels, however, the relative sizes between the thing we're trying to observe and the particle we're using to "see" it are much closer. As such, "bouncing" anything off of them knocks them around, similar to pool balls on a table.

The result, as Werner Heisenberg determined, is that we can only take one measurement - one observation - of a particle at any point, because as soon as you take one (position, say, or momentum), you change the particle's complementary properties. This has been formally labelled "Heisenberg's Uncertainty Principle", and it forever broke down the myth of the "impartial observer" - someone (usually a scientist) who could sit back and "just watch" without interfering with a process.
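The scale dependence described above can be sketched with back-of-the-envelope numbers. Here's a toy calculation of my own (not from the essay) using the standard Gaussian-wavepacket case, where the position and momentum spreads saturate Heisenberg's bound at sigma_x * sigma_p = hbar / 2:

```python
# Rough numeric sketch (my illustration): why uncertainty only matters
# at atomic scales.  For a Gaussian wavepacket, the spreads saturate
# Heisenberg's bound: sigma_x * sigma_p = hbar / 2.
hbar = 1.054571817e-34  # reduced Planck constant, in joule-seconds

def min_momentum_spread(sigma_x):
    """Smallest possible momentum uncertainty for a given position uncertainty."""
    return hbar / (2 * sigma_x)

# An electron pinned down to roughly the size of an atom (~1e-10 m):
m_electron = 9.109e-31  # kg
dv_electron = min_momentum_spread(1e-10) / m_electron
print(f"electron velocity uncertainty: {dv_electron:.2e} m/s")  # hundreds of km/s

# A 0.1 kg billiard ball located to within a millimeter:
dv_ball = min_momentum_spread(1e-3) / 0.1
print(f"billiard ball velocity uncertainty: {dv_ball:.2e} m/s")  # utterly negligible
```

The same bound that is invisible for billiard balls makes an atomic-scale electron's velocity uncertain by hundreds of kilometers per second.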

Heisenberg's was the first blow to the notion of the independent observer, but it wasn't the last. Quantum theory, as it has developed, has focused more and more on particles themselves existing as a kind of probability wave. In short, if you shoot an electron out of a tube pointed in a given direction, you can have a general idea of where the electron may be at any given moment, but you can't be sure. That in itself isn't surprising to most people; the hard part is that, for the observed results to be correct, we can't say the electron is actually anywhere in particular. The uncertainty in its location isn't just a matter of our not knowing - it seems to be intrinsic to the particle itself.

Here's the classic experiment. We have an electron gun pointed at a wall with two slits in it; logically, an electron can only travel through one slit or the other. However, if we place a piece of electron-sensitive paper on the far side of the slits to see where the electrons end up, we get something odd: the pattern that results can only occur if each electron takes both possible paths, not just one or the other. This is true even if the electrons are fired one at a time, with enough time for the first to hit before the second is released.

Let me restate that, because it's possible to miss the implication: in order to account for what we actually see happen, we have to assume that part of the electron - something indivisible under normal circumstances - passes through each slit. The notion that we can't know its position isn't just a factor of our not having the precision to determine it: according to results, the particle has no definite position until it reaches the paper and, it seems, has to take all potential paths to get there. Our best understanding of this is that it exists only as a probability, and that somehow those potential probabilities are what interact and cause the observed pattern.
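The "add the possibilities" rule can be sketched numerically. Below is a toy model of my own (arbitrary made-up units, not the essay's data): each slit contributes a complex amplitude to each point on the paper, and the observed fringes appear only when we add the amplitudes before squaring, rather than after.

```python
import math
import cmath

# Toy two-slit model in arbitrary made-up units.
wavelength = 1.0
k = 2 * math.pi / wavelength     # wavenumber
slit_gap = 5.0                   # distance between the slits
screen_dist = 100.0              # distance from the slits to the paper

def amplitude(slit_y, screen_y):
    """Complex amplitude arriving at screen_y from a slit at height slit_y."""
    path = math.hypot(screen_dist, screen_y - slit_y)
    return cmath.exp(1j * k * path) / path

def both_paths(y):
    """Quantum rule: add the two amplitudes, then square the magnitude."""
    return abs(amplitude(-slit_gap / 2, y) + amplitude(slit_gap / 2, y)) ** 2

def one_path_or_the_other(y):
    """Classical rule: square each amplitude separately, then add."""
    return abs(amplitude(-slit_gap / 2, y)) ** 2 + abs(amplitude(slit_gap / 2, y)) ** 2

ys = [i * 0.5 for i in range(-40, 41)]
quantum = [both_paths(y) for y in ys]
classical = [one_path_or_the_other(y) for y in ys]

# The quantum intensity dips to (near) zero between bright fringes - dark
# bands the "one slit or the other" model can never produce, since it is
# a sum of two strictly positive curves.
print(f"quantum   min/max: {min(quantum) / max(quantum):.5f}")    # ~0: dark fringes
print(f"classical min/max: {min(classical) / max(classical):.2f}")  # smooth, no dark bands
```

The dark bands in the first model are the interference pattern the paper actually records; the second, "one slit or the other" model can only ever produce a smooth double hump.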

It gets weirder: if we set up a system to watch the electron as it passes through the slits so that we can observe which one it goes through, the odd pattern requiring the probability explanation disappears; the result is explainable with classical physics models. So, it appears that the electron behaves like a single particle once we observe it, but until we do, it behaves like a set of probabilities. Furthermore, this is true not just of electrons but of every particle the process has been tested with: as near as we can tell, all matter exists probabilistically until observed and classically after observation.

Now, in all likelihood, what's occurring here is probably not an "actual" change in the nature of the particle but some kind of observational distortion similar to Heisenberg's Uncertainty; that said, scientists have tried for years to disprove this and other weird aspects of quantum theory, to no avail. Philosophers and religious types, of course, have taken this "elevation of the observer" to dizzying heights [pardon the pun] in an attempt to "scientifically" validate free will, all manner of gods, or whatever unprovable phenomenon they prefer. The use of probability as a model for understanding quantum theory also led to Einstein's famous admonition that God "does not play dice with the universe"; unfortunately for the good professor, observational evidence seems to disagree.

Even if we discount the philosophical extensions, the increased awareness of observation and its participation in the process can be very useful. In "real world" terms, there are far more practical impacts to observation than simply photons careening around like billiard balls: namely, the fact that everything we observe not only has to pass through intermediary objects and multiple analog processing systems, but that it then has to be interpreted by us. Bringing into focus how our biology and psychology can affect our observations (and, thus, our interactions with and even thoughts about everything around us) is the focus of general semantics, but that's for another day.

For now, just try to keep in mind that you aren't really reacting to the world - you're reacting to reactions to the world, and even then through a layer of interpretation.



Written Chinese is a logographic script with pictographic roots: the symbols are representational, each depicting a full word or concept. For example, the word for "island" was, originally, a picture of a mountain in the water where birds land; the current version has been "blurred" by the use of brush instead of stylus, but you can still make out some of the concept.

What's important here is that, with pictures, you can't really denote tense - the designation of whether something already happened, is happening, or will happen in the future. As such, written - and even spoken - Chinese has no real tense for any verb. If you want to say you went to the doctor yesterday, you quite literally say, "I go to the doctor yesterday."

Chinese is of course not the only language that has this issue - and English has plenty of problems of its own. While these differences go a long way towards explaining the odd syntax foreigners use in non-native languages, there are other implications.

Scientific studies have shown that, if your language doesn't have a word or concept for something, it is much more difficult - if not downright impossible - for you to think in terms of that thing. Colors and musical scales are easy examples: anyone used to "western" music will have a very hard time differentiating all the notes in Indian music. People joke that the Inuit have 50 words for "snow", but what they really have is 50 different things they can identify which, to us, are all simply "snow": we lack an awareness of a differentiation they can make.

Most people I know look at a block of code and simply say, "Oh, it's a program." However, I differentiate between markup, interpreted, and compiled code; imperative, procedural, object-oriented, and functional languages; etc. I have words for all of these, and thus I also have concepts behind the words.

And having the word and the concept influences other aspects. Philosophies in China (and similar Asian cultures) tend towards timelessness, reincarnation, and "balance" between action and non-action - all of which makes sense when we consider that their language doesn't easily differentiate between past, present, and future. If I can't describe the differences between tenses, won't my phrasing - and eventually my thought pattern - reflect that kind of timelessness?

I've heard one definition of "genius" which states it as the ability to conceive of something that one doesn't have the language to describe. Many of us have invented words at times, trying to differentiate something that heretofore hasn't been differentiated. Sometimes it's a matter of nuance, and sometimes it's simply a concept that doesn't exist in our native language. A recent addition to the English language (beyond mere technical or "slang" terms) is the concept of a "meme", coined by Richard Dawkins.

This is probably the best argument for studying multiple languages - and not just ones with a common root. If you speak English, learning German - or even a Romance language - doesn't give you nearly as much as learning Greek, Chinese, or Russian. At the same time, English has become the international language not just because of the power of English-speaking countries like the US but because of its amazing breadth: English has adopted/stolen/incorporated more words and concepts from other languages than perhaps any other language in existence. It also has an unusually rich system of tenses, so one can often make more precise statements about time in English than in many other languages - a major benefit in technical and scientific fields.

And, yet, any language still has its limits. My favorite example of a concept which does not exist in English is that of the Greek arete: it's been loosely translated as "virtue" or "purpose", but "essence of being" would also be close. None are exact, though; it's a concept which only exists in abstract association in English, and yet was so intrinsic to Greek culture that it is the reason for the Olympics.

The important factor to remember is this: even the language you speak can frame the concepts of which you can think. It's important to try to work beyond linguistics to more abstract thoughts that cannot be easily converted into language. Who knows - you might even get to make up a new word.


Just the facts

We live in a world that follows rules. We don't know what most of those rules are; some of them, we understand a bit and can approximate to decent levels. Not knowing the rules, however, doesn't change the fact that the rules exist and are pretty consistent.

The more you know about the rules and about the world around you, the better off you'll be in trying to maneuver through life. To this end, the most important thing in life is the truth. If you don't have facts and real information, any decision you make is automatically flawed. Determining what is real - what is truth - is the most important act you will ever undertake.

What's more, you'll have to make that determination every moment of every day of your life. As I said, we only know approximations of some of the rules; the rest are either completely unknown or guesses at best. Sometimes we don't even know that a rule exists. So, every time you hear a statement or learn a fact, you need to be able to make a determination on the truth of that statement or fact.

The most obvious way to verify a statement is to see if it's consistent with what you can observe in the world around you. If someone tells you the sky is pink but you look up and see it's blue, you've just falsified that statement. If someone tells you that "doing x will cause y", and you do "x" repeatedly but "y" doesn't happen, you've falsified that statement; if "y" *does* happen, you have evidence that there's truth in it.

While it would be great to say that something is either true or false, we don't have a way to do so most of the time. The best we can sometimes do is approximations - determining whether something is likely true or likely false rather than absolutely true or false. Some statements are easily proven true but hard to falsify; for others, the reverse is true and it's easy to falsify but hard to prove. There are also statements which can't be proven either way - sometimes because we don't have the ability right now, sometimes because we'll never have the ability.

The most important thing in any life is truth, and the most important skill you can hone is learning how to identify the level of truth in a statement. Observable reality is the only measuring stick of value. If something doesn't match up with observable reality, it simply isn't true.


The map is not the territory.

Most people have no clue what that phrase means, but it's very important.

In a park near you, there is likely a tree of some sort. It doesn't matter if it's tall or short, evergreen or deciduous, flowering or not. The tree exists.

You have an image of the tree, a concept of it, a description. You can reference it in conversations with others, or in poetry if the fancy takes you, because of this. But, and here is the tricky part, the concept you have of the tree is not the same thing as the tree itself.

The tree exists as a thing, whether you think of it or not, whether you name it or not, whether you comprehend it or not. The thing itself, the existence that is true even if there were nothing to observe it, is called the noumenon. Your idea of the thing, your descriptions and memories and such, is called the phenomenon (which is "the thing as perceived"). The two are very distinct.

A single noumenon can have multiple phenomena - in fact, there's at least one for every observer. Each person, animal, insect, or even bacterium experiences the tree separately and thus has a different, unique way of describing it. These phenomena can be related, shared between people (or other creatures) with words or sounds or scents.

The thing itself, though - the noumenon - is singular. It exists independent of (and in fact outside) observation, and it cannot be related or experienced directly. In a sense, the phenomenon is the map of your neighborhood, while the noumenon is the actual neighborhood itself. The map is useful, but it can't in any way represent the full reality of the neighborhood: the smells, the sights, the friendliness or animosity of neighbors, etc.

The point is, the two are separate. The description or experience of something, while useful, is not the thing itself. The map is not the territory.

The problem arises when people begin to think that the map is the territory: when they start to mentally think of their description of something as being equivalent to the thing itself. One very common but unfortunate example of this is stereotyping: someone recognizes a prominent characteristic of a group of people and then uses that characteristic (and usually whatever negative traits they associate with it) as the full description of anyone in the group. Individuals are, obviously, individuals, and usually differ drastically in many ways, but to the stereotyper, their description (the map) has become synonymous with the individual (the territory).

Other examples:
  • In organizations, the tendency over time is to begin to treat the company like an org chart or process description, with people merely filling in roles. This generally causes tension and discord, since organizations are living, fluid, dynamic systems.
  • In science, many scientists who have worked with certain theories over time begin to think like the theory is reality, rather than just being a description of reality; this often leads to drama when data arises that doesn't fit the theory.
  • In schools, grades have become the reason for the classes, rather than a method for demonstrating understanding of the content of the classes. A high test score is not equivalent to comprehension of the content.

Sometimes, people try to confuse map and territory deliberately. A good example of this is the "Tea Party", which is trying to get people to see the entire party as only one aspect of the movement (the anti-government portion) and completely miss the full spectrum of what the movement is actually doing/vowing to do (to be fair, most political parties do this).

Maps should be dynamic: they change as new information becomes available, just like Google updating their satellite photos. Too often, mistaking the map for the territory leads to undue dependency or emotional ties to a specific map. That makes it harder (if not impossible) for the individual to adapt as reality changes, and makes any decisions made based on the faulty map automatically suspect.

If you ever find yourself frustrated that something isn't behaving "as it should", this is almost always a sign that your map is faulty and needs to be adjusted: either your understanding is flawed or your information is incomplete (or both). The noumenon is perfect (in the sense that it is entirely self-consistent), so any inconsistency has to be in the phenomenon (your understanding/experience of it).

Ignoring the realities of the territory and simply assuming the map is correct can lead you to walk off of cliffs, metaphorical or otherwise.



"It’s like doing a jigsaw puzzle where you can change the shape of the pieces as well as their positions."

This is one man's description of what programming is like from the perspective of the programmer. From a comprehensibility standpoint, I think it's pretty good.

Programming, as a function, is something that mystifies the vast majority of the population; as a programmer, I think it's safe to say that even the majority of the people paid to program have no real idea what they're doing on the conceptual level. If you ask a bunch of programmers "what do programmers do?", you'll likely hear a range of replies from the uber-technical ("transliterate a requirement") to the basic ("implement a process") - and that's just on the actual work performed. If you ask a bunch of programmers *how* they do what they do, you likely won't get any really substantive answer at all.

It "should" be simple, really. All programming languages consist of a basic series of operations and relationships; such series are notably finite and often fairly short, when you actually look at the principles - offhand, I'd say there are fewer than 200 specific relators covering all programming languages (though syntax often varies wildly), and probably fewer than 20 if we are willing to use abstraction. All we're really doing when we program is combining various objects/concepts/"things" using these relators.

Of course, "all" a painter does is put pigment on canvas, and pianists have only 88 or so notes with which to play. The complexities of painting and composition come not from the tools or options but from how those tools and options are utilized. There's a creative step, an intuitive moment or leap that takes place. This leap generally takes things we understand or comprehend and presents them in ways we do not expect, and it is usually founded in describing a relationship between concepts that heretofore have not been understood as related. The photographer captures profound emotion in a simple flower; the painter portrays the subject from a perspective never before seen; the musician writes a motif that recalls a spring rain.

The fundamental principle here is an understanding - albeit usually subconscious - of relationships. To quote Lewis Carroll, why is a raven like a writing desk? The first act of creation is in establishing the existence of the unknown - the fact that something is waiting to be created. Usually, this understanding then provides the context for creation - in essence, understanding what is missing gives us the first step in understanding what it is we want to create. We then have to frame that desire in the context of our medium - the semantics of creation are different for photography, music, painting and poetry.

We can formalize these steps as lack (or problem), desire, and semantics. What we usually think of as the "creative step" in the series is "desire": the conceptualization of what it is we're trying to achieve in some kind of workable or formal structure. This is the "hard part", as it requires both clarification of the problem as well as realizing the limits of the semantics even while we work separately from either.

And this is the step in programming that most people don't understand. Semantics, in programming, is merely the syntax of the language - something even text editors can manage, so obviously not a major issue intellectually. The "problem" is usually input from a client or customer, though oftentimes such individuals need help clarifying exactly what it is that is wrong; still, this is a piece that is easily managed through standard processes. It is the clarification and implementation of desire that requires abstraction and the mental jigsaw puzzle: figuring out how to fulfill the needs presented in the desire under the limitations of the semantics, when there are quite literally infinite potential solutions but only a few practical ones.

Programming is, for all its technical requirements, a creative act not terribly different from any other artistic pastime. Just like other arts, the moment of inspiration in programming cannot be laid out in a procedure or taught in a classroom. One can teach someone to use a camera, but one cannot teach someone to take inspiring photographs; one can teach someone English, but one cannot teach them to write beautiful poetry. One can teach programming languages, syntax, and grammar, but one cannot teach effective programming.

Obviously, the better a coder (or photographer, or linguist, or...) a person is, the wider the range of tools and options available to that person while programming (or photographing, or writing, or...). However, as art forms, all of these rely on moments of inspiration. One can encourage the mindset needed to have such moments, and train oneself (or others) in the sorts of abstract daydreaming that form the basis for them, but one cannot explicitly teach inspiration itself. It comes naturally or not at all.


Sometimes other people say it best

I am the outcome of a trillion coalescing possibilities...
That's a quote from PZ Myers; go read the whole thing.


Apples and protons

So here's the deal.  This apple, the one we can't decide whether it's real or not, isn't just an apple.  It's a proton - or electron, or neutron, or really any particle of matter in the universe.

Most people imagine particles as tiny little marbles - mainly because this is how they're presented in chemistry and physics classes.  That's the old model, though, and even the "new" model - developed with the rise of quantum physics - is becoming more and more suspect.

You see, just like our first apple, particles don't really exist.  They're not physically "there" with any exact properties we can determine.  Everything we know about them we know indirectly by how they interact with other matter (of course), but since we don't know that the other matter exists either, we're in a bit of a conundrum.

We do know that the idea of little marbles doesn't work, for a variety of reasons.  Again, that's not new.  One basic flaw is with the electron, which doesn't exist like a moon orbiting a planet but more like a shell of potentiality around the nucleus (and no, there's not really a simpler way of saying it).  What we didn't know then, but is rapidly being suspected now, is that the nucleus is the same: a mass of potential rather than a bunch of marbles stuck together.  When we "knock a neutron out", it's more like the properties of a neutron being separated out of the miasma than a billiard-ball action.

It gets "worse", though.  Even a separate, distinct free-floating particle like a neutron isn't "whole" - it's made up of smaller particles, or seems to be.  However, these particles - such as quarks - don't seem to be able to exist by themselves for any practical length of time.  So, whether they really exist independently or are merely the "shards" of an "exploding" particle, we don't know.

And then we get into the matter/energy conundrum: how can two things with no common physicality be interchangeable?  The answer, of course, is that they have to have some common physicality - some mechanism or particle or something - that is matter when grouped like this but energy when grouped like that.  Such is part of the search for the mystical Theory of Everything, which is looking to be more and more mythical as time goes on.

So, we don't know what anything is, we can't fundamentally describe anything, and we're not even sure we'll ever be able to.  Yet, matter exists (or seems to).  How?

This is our second apple - the average, the concept, or in physics terms, the superposition of the combined wave functions.  Detectable matter, to the best of our perception, only exists as an average - even on subatomic scales.  We can categorize things loosely, describe fuzzy margins with inherent leeway, but we can never state exactly what something is or isn't.  When we look for something, it seems to be there, but if we study its effects and try to pinpoint it, we can't be sure.

Luckily, "fuzzy margins" are good enough for a lot of things - in fact, for pretty much any applied science, such as engineering or nuclear physics.  However, the fuzziness places an accuracy limit on what we can know at this point.  Unless we can find some more fundamental principle that can eliminate the fuzziness, we're rapidly coming to a wall in our understanding of basic existence.

So, the next time you sit in a chair, try to understand that the only reason you don't fall to the floor is that the average of your existence can interact with the average of its existence.  And the next time you eat an apple, try to picture it as a superposition of potentials, not just a piece of ripe fruit.


Every apple is a real apple.

Take every apple that ever existed, whether real or imaginary. Combine them all into a single structure - not exactly by averaging, but by overlaying the different pieces of information together to get a cohesive whole. The result would be something that, basically, averages out to an apple in theory but doesn't necessarily resemble a physical apple.

That's okay, though, because we're not after a physical apple. We're after a potential apple - in essence, a kind of mathematical function that defines the limits within which "apple" exists. The limits are fuzzy, just like the edges of our structure: some characteristics don't start and stop so much as fade, and fractal math states that such edges must be infinite in gradation. In a linguistic sense, you're left with a bunch of traits that are "apple", a bunch that are "apple-ish", and some that "tend to be apple" at this end but gradually become less apple-y at the other.

Now, let's use that structure - that equation or description, however you want to think of it - as our definition of "apple". Suddenly, every apple is a real apple, because we're not looking for specifics so much as trends or patterns.

This is hard for a lot of people to grasp: we've traded something hard and explicit for something vague, but in doing so we end up with a better definition and, in actual fact, a better understanding of that to which we're referring. Whereas before nothing could be said to be an apple (and thus the definition was useless), we can now define with reasonable precision what is or isn't an apple and use that data for further explorations (such as defining what is or isn't an apple pie).


There's no such thing as a real apple.

Most of us had those silly kids' books with a picture for every letter of the alphabet; almost always, "A" is represented by "apple" - generally a picture of a bright red shiny apple, sometimes including a happy-faced worm poking out of it. That picture - or some other like it - became the foundation of the concept of "apple" we associate with the word "apple" for the rest of our lives. Over the years, we've added to it, modified it slightly. We've encountered real apples and other pictures, and so the mental image gets a little distorted from that original. However, when we think "apple", we see the picture.

The problem is, it doesn't exist. No apple ever looked like that. Even if your mental image is of the last physical apple you ate, your image is incomplete as well as tainted by other memories.

Simply put, the image in your head - the concept of a "real apple" - doesn't exist, and any real apple that does exist won't exactly match the image in your head. So, there's no such thing as a real apple - just the approximations of "apple" to varying degrees.



It's impossible to know everything. That shouldn't be surprising to most people.

However, the next statement will probably flip out a few people: it's impossible to know not just everything but anything.

Well, for certain values of "know", anyway.

The fully qualified answer is this: it is impossible, within the confines of the universe, to be 100% certain that any knowledge or perceived knowledge is 100% representational of reality. There are countless - in fact, likely near-infinite - examples of this in history, where various folks (many quite brilliant) stumbled upon something and said, "Ah ha! At last we know for sure!" only to have some piece of data float on by later that doesn't fit their theory, leading to a new theory and a new "Ah ha! At last..."

One might argue that the single biggest obstacle to scientific advancement is the certainty that one is correct. That being said, one could also say that the single biggest boon to scientific advancement is the drive to legitimately prove it - which always, eventually, fails.

Now, there are a few arguments that challenge this, usually nuanced.

In the first: "But, our laws and theories and science have led us to do amazing things! They have to be true!" No, they don't. They just have to be consistent enough with reality within allowable tolerances. An excellent example of this is Aristotelian mechanics: the fundamental principles are flat-out wrong, as proven by thousands of years of study. However, they're "wrong" by details that don't arise in practical use, especially for Ancient Greek-level technology. So, while "wrong", they're decent enough approximations that amazing things could be done using them. Modern science is no different. Our tolerances in many situations are much smaller, so our theories tend to be consistent with reality down to very fine levels of detail. However, that doesn't mean the logic or reasoning behind them is accurate, or that some random bit of data won't fall into our laps in the next few years with which they are horribly inconsistent. Just take a look at things like dark matter and dark energy if you want to see examples.

The second argument is far more layman: "Okay, so that kind of technical stuff may not be 'known', but real-world stuff is. Like, I know I'm standing here talking to you." No, you don't. This piece of the puzzle can get a little more philosophical, but it's true on the practical level as well: you can never state how something is, only how it is perceived. The difference is far from trifling. In scientific terms, we have the observer effect - the act of observing something inherently interferes with the thing being observed - along with Heisenberg's Uncertainty Principle, which puts a hard limit on how precisely certain pairs of properties can ever be known at once. Combine that with limitations on observational speed, and it is easy to deduce that any statement we could make, technically, about something has a decent chance of being false by the time we make it.

There's another sort of "fuzziness" that gets introduced, especially with real-world examples. For one: no human is a thing; humans are collections or concepts to begin with. Each person is made up of multiple bits - billions or more, depending on the level we wish to examine - but even from a cellular level, the body has pieces constantly growing, changing, and dying. Matter is taken in, matter is excreted, and as a result the body itself changes. Even without that, individual atoms or subatomic particles disappear and reappear according to quantum flux principles. You as an individual and everything around you that you recognize only exist as an average state of multiple bits that are constantly changing. Again, our level of "allowable tolerances" comes into play: nothing is known except to a certain degree of approximation, and that includes existence.

The final support for our lack of knowledge is both more technical and more philosophical: everything in the universe seems to be affected by everything else. And by "everything", I mean everything. Every electron, every microvolume of space, every spark of energy affects and is affected by everything else. It is, therefore, impossible to know everything about, for example, a proton without knowing how it is being affected by everything around it - which would require knowing everything about everything. Now, on a local level, the larger part of the universe affects everything evenly, so that only local variations really "matter" for our purposes: for example, we don't need to know the locations of all the planets and their velocities to be able to determine (within practical tolerances) how fast a cannonball will fall from a tower, because we, the tower, the ground, and the cannonball are all being affected in about the same way by the planets. However, that's just another level of approximation - if we wanted to know how fast the cannonball was travelling through space, we'd have to know everything.

It's ironic that, in a very real sense, one reason we've managed to accomplish so much is that we're egotistical enough to think that the levels of observation we can make are the only ones that matter.

If you want to try to work these ideas into your real life, you can answer "probably" instead of "yes" to questions. You can also try to work in E-Prime, though it might take people a while to notice the difference.

And, finally, I'll just say that while it's impossible to know everything, or even anything, that's far from a reason for us to stop trying. As the phrase goes, "A man's reach should exceed his grasp/else what's a heaven for?"


Gravity got you down?

(Oh yeah, this one's going to be fun...)

Well fret no more. I'm here to tell you that gravity doesn't exist!

There, feel better? No? Still tethered to your planet, at the bottom of a gravity well hundreds of miles deep (unless I have readers on the space station)?

Okay, then, let's back up a little. Gravity exists, but no one knows what it is or why.  If you go by the old rule of "if you can't define its limits, it doesn't exist", we could argue that gravity doesn't.  (Hrm, still not floating...)

Gravity's a bit like time in that respect. We have it, we can measure it, we can even use it, but we have no way of describing it fundamentally. We have no real notion of what is taking place, other than a grade-school concept of "the attraction between two bodies".

First, let's give what we *do* know: gravity seems to be an attractive force. This doesn't mean it's pretty: I mean that it brings two objects together. The amount of gravity any one object "inflicts" on any other can be calculated from their masses and the distance separating them. We also know that gravity is indistinguishable from acceleration.
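
That calculation is Newton's law of universal gravitation, and it's simple enough to sketch in a few lines of Python. (The constant and the Earth/Moon figures below are standard textbook values, included just for illustration.)

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravitational_force(m1_kg, m2_kg, r_m):
    """Attractive force in newtons between two masses separated by r metres."""
    return G * m1_kg * m2_kg / r_m**2

earth = 5.972e24    # kg
moon = 7.342e22     # kg
distance = 3.844e8  # metres, average Earth-Moon separation

force_n = gravitational_force(earth, moon, distance)
print(f"{force_n:.2e} N")  # roughly 2e20 newtons
```

Run it and you get a pull of roughly 2x10^20 newtons between the Earth and the Moon - small per kilogram, but always there, always attracting.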

That's pretty much it. We don't know how it works: most interactions between objects require a medium of interaction, and despite searches for the almighty graviton, no such particle has ever been detected. Einstein got around this by stating that the "medium" was spacetime itself, and while it's a neat little answer, it's a bit too neat - and too black-boxy - for most modern physicists. We also haven't really been able to determine whether gravity operates within discrete units like other forces; this is important, because discreteness would imply a quantum nature whereas a lack of discreteness would seem to imply - well, something else. We also know that gravity is related to mass, but we don't know *why* it's related to mass.

We also have to reconsider the whole "attraction between two bodies" concept, not because it's wrong but because it's a simple statement that really doesn't make its full impact apparent. When we say "between two bodies", we really mean "between two things that exist" - again, since gravity is related to mass, and mass is interchangeable with energy, and everything in the universe has mass and/or energy, everything inflicts some gravity. Also, everything is affected by gravity. So, our little "attraction between two bodies" really means "attraction between everything in the universe and everything in the universe". That's right, gravity is trying to pull together everything in the universe; most of the time the force is too small to be measured or effective, but it's always there (and yes, while this means you're technically being drawn however slightly towards that bag of potato chips, it doesn't explain why you had to eat the whole bag in one sitting).

There is also, notably, no such thing as "anti-gravity", which is one more trait gravity shares with time: whereas most forces or actions have an opposition, gravity (like time) does not. Even antiparticles - which are just mass like anything else - are attracted to each other and to standard particles. We've never encountered anything that has a reverse gravitational effect - basically, a repulsion instead of an attraction - that isn't based on some other force.

That's not to say a lot of people aren't trying to solve the unknowns. Quantum gravity, loop quantum gravity, holographic theory - there are a lot of approaches to answering the big question of how gravity works. There are even a few that state that gravity, like centrifugal force, "doesn't really exist" and is instead just something else being (mis)interpreted as a separate force - usually acceleration (again!) or entropy.

We'll have to see where it all goes. Since much of our future scientific and cultural expression will rely on overcoming gravity efficiently (at least locally), I expect this area of study to be one of the biggest for the next 50-100 years or until a major paradigm shift takes place.

Until then, it helps to realize that even something as familiar as gravity isn't all that familiar and remember all the good things that gravity permits - like a nice day in the waves at the beach.


Go West

The sun doesn't rise in the east.

Think about that.  Really wrap your mind around the concept.

Of course the sun doesn't rise in the east.  It's Earth that rotates and causes the illusion.  We all know this.  Yet, sunrises and sunsets are so ingrained in our social consciousness that we have trouble breaking free of them.

Civilization shifts westward.  While not an absolute trend, it's definitely a trend that is visible in history.  A lot of thought has been given to why this trend exists, but the most reasonable and testable hypothesis has to do with sunsets.

As the day goes on, the sun appears to sink into the west.  Like the sunrise, sunsets are built into our social culture and tend to represent the end of things: the end of the day, of a life, the passing on to something we cannot follow, etc.

So, it's only natural that those people who are seeking to break molds or to escape from the limits of their lives will chase the sun and try to reach that sunset.  That means moving westward.

So, this migration started in eastern Asia aeons ago and is "currently" at the West Coast of the US.  That has many people convinced that Japan/China/eastern Asia are next up for the migration, completing the loop and perhaps starting another.  But, if the true motivation is "chasing the sun", as it were - reaching beyond the limits of the known/current to something else - then that makes little sense.  Indeed, the supposed rise of Japan in the late '80s collapsed, and even China's "supremacy" of global markets at the moment is driven largely by unsustainable fiscal policy on the part of China's government.  India may be on the brink of tremendous famine thanks to a new wheat disease, and no other country in the region is set for major growth.

So, if we can't keep going west, but we have to go somewhere... the only place to go is out.

Time to *really* start chasing the sun, and get beyond sunrises and sunsets altogether.


Astronomical fun facts

The sun doesn't rise in the east.  It actually doesn't rise at all; the earth rotates.  It can be difficult to remember this when watching it from the surface of the Earth, but doing so can really help change one's focus.

The space station is about 200 miles above Earth's surface.  If you could drive vertically as easily as you can drive horizontally, it'd take you about 3 hours (at a normal 65 mph) to get there.  If you then decided to drive off to the moon, pack a few snacks: it'll take you another five months or so of non-stop driving.
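
If you want to check that math yourself, here's the back-of-the-envelope version (the distances are rough averages, and the moon figure assumes you never stop for gas):

```python
# Rough drive-time arithmetic for the fun fact above
speed_mph = 65
iss_altitude_miles = 200
moon_distance_miles = 239_000  # average Earth-Moon distance

hours_to_iss = iss_altitude_miles / speed_mph    # about 3 hours
hours_to_moon = moon_distance_miles / speed_mph  # about 3,700 hours
months_to_moon = hours_to_moon / 24 / 30         # about 5 months, driving 24/7

print(f"ISS: {hours_to_iss:.1f} h, Moon: {months_to_moon:.1f} months")
```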

99.86% of the mass of the solar system is in the sun.  Jupiter is less than 0.1% of the mass of the solar system, yet it's 2.5x the mass of all the other planets combined.  That means that there's enough non-planetary mass in the solar system to make up 21 Earths.  Might get a little crowded around here.
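
Here's a quick sanity check on the first two figures, using rough textbook masses. (Moons, asteroids, and comets are left out, so the totals are approximate - which is also why the "21 Earths" of leftover mass isn't computed here.)

```python
# Approximate masses in kilograms
sun = 1.989e30
jupiter = 1.898e27
other_planets = 7.7e26  # Saturn + Neptune + Uranus + the terrestrials, roughly

total = sun + jupiter + other_planets   # ignores moons, asteroids, comets
sun_fraction = sun / total              # about 0.9987, i.e. ~99.87%
jupiter_vs_rest = jupiter / other_planets  # about 2.5

print(f"Sun: {sun_fraction:.2%}, Jupiter vs rest: {jupiter_vs_rest:.1f}x")
```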

The mass of the solar system is about 1.989x10^30 kilograms.  There are about 6.022x10^23 protons and/or neutrons in 1 gram of ordinary matter (Avogadro's number), or about 6.022x10^26 in a kilogram.  That means, in the solar system, there are approximately 1.2x10^57 protons or neutrons (electron mass is mostly insignificant).  This is off slightly, since some of that mass is converted to energy.
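
The arithmetic here is a one-liner, using Avogadro's number (roughly 6.022x10^23 nucleons per gram of ordinary matter):

```python
# Nucleon count of the solar system, to a rough approximation
avogadro_per_gram = 6.022e23      # protons/neutrons per gram of ordinary matter
solar_system_mass_kg = 1.989e30

grams = solar_system_mass_kg * 1000
nucleons = grams * avogadro_per_gram  # about 1.2e57

print(f"{nucleons:.1e} nucleons")
```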

Contrary to popular belief, the Moon doesn't orbit the Earth: they both technically orbit a common gravitational "locus", similar to the way two people holding hands and spinning orbit some point between them.  Because the Earth has significantly more mass, the locus is "within" the Earth - a few thousand miles off-center, about 1000 miles below the surface - so, from a distance, Earth looks like it wobbles as it spins.  Also, the Earth - and all the planets - don't actually orbit the sun, for the same reasons.  In fact, because of the distances involved and the density of the sun, the Sun-Jupiter locus (also called the barycenter, or center of mass) generally sits about 30,000 miles above the surface of the sun, though the "wobble" of the sun changes dramatically depending on planetary alignment.
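
The position of that locus is just a mass-weighted average, if you want to verify the "about 1000 miles below the surface" figure (masses and distances are standard approximate values):

```python
# Two-body barycentre: distance from the primary's centre is
#   d = D * m2 / (m1 + m2)
earth = 5.972e24    # kg
moon = 7.342e22     # kg
separation_km = 384_400  # average Earth-Moon distance

d_km = separation_km * moon / (earth + moon)  # about 4,670 km from Earth's centre
earth_radius_km = 6_371
below_surface_km = earth_radius_km - d_km     # about 1,700 km, i.e. ~1,000 miles

print(f"{d_km:.0f} km from centre, {below_surface_km:.0f} km below the surface")
```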

The Milky Way (our galaxy) is about 100,000 lightyears across and 1,000 lightyears thick.  This tells us that the volume of space it occupies is roughly 7.8 trillion cubic lightyears.  If we assume the high-end estimate of 400 billion stars, that averages out to one star per 19 cubic lightyears.  This isn't really accurate, since stellar density is far higher in the center of the galaxy than at the edges; out by us, the actual density is closer to 2300 cubic lightyears per star.
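
The volume estimate treats the galaxy as a flat disc (a cylinder); here's that arithmetic laid out:

```python
import math

# Model the galaxy as a cylinder 100,000 ly across and 1,000 ly thick
radius_ly = 50_000
thickness_ly = 1_000
volume_ly3 = math.pi * radius_ly**2 * thickness_ly  # about 7.85e12 cubic ly

stars = 400e9  # high-end estimate
ly3_per_star = volume_ly3 / stars  # about 19.6 cubic ly per star, on average

print(f"{volume_ly3:.2e} cubic ly, one star per {ly3_per_star:.0f} cubic ly")
```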

If we assume that radio waves have been broadcasting from Earth in some form since around 1880, then those signals fill a volume of space of approximately 9.2 million cubic lightyears.  That means our radio broadcasts have reached less than 4,000 stars and just over 0.0001% of the galaxy.  Needless to say, the odds that any of the planets near us could support life, much less *do* support life that could notice our broadcasts, are pretty low, but we've only just scratched the surface of the galaxy as a whole.
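
And the radio-bubble numbers, for anyone who wants to play with the assumptions (the 2,300-cubic-lightyears-per-star local density comes from the Milky Way fact above):

```python
import math

# Radio waves expand outward at one lightyear per year
years_broadcasting = 130  # roughly 1880 to 2010
radius_ly = years_broadcasting
bubble_ly3 = 4 / 3 * math.pi * radius_ly**3  # about 9.2e6 cubic lightyears

local_ly3_per_star = 2300                        # local stellar density
stars_reached = bubble_ly3 / local_ly3_per_star  # about 4,000 stars
fraction_of_galaxy = bubble_ly3 / 7.8e12         # just over one millionth

print(f"{stars_reached:.0f} stars, {fraction_of_galaxy:.4%} of the galaxy")
```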


Right to Life

So, I'm going to approach a very delicate subject from an angle that I rarely hear expressed coherently.

The abortion debate is very contentious, with members on both sides railing about rights to life and quality of living and such.  A quick question - really, the question - is whether or not the foetus has the same rights as the carrier.

I'm going to side-step the moral questions for a moment and merely tackle this specific question.

First off, we have to understand what we mean when we refer to the rights of the carrier.  As I'm in the U.S., my viewpoint will be skewed slightly towards that political climate, but that should have little impact on the underlying reasoning.

The founding ethos of the U.S. states that various rights - life, liberty, pursuit of happiness, etc. - are imbued in all humans; this isn't stated in the Constitution, mind you, but it's often used as a governing principle when interpreting the meaning of the Constitution.  So, from a legal aspect, we can state that "humanity" is one required principle for a certain level of rights.

Obviously, there are also rights granted to citizens, who are classified as natural or naturalized - both of which are specific processes or conditions.  The easiest way to qualify as a citizen is to be born in the U.S.; since the foetus obviously isn't born yet and, furthermore, likely hasn't gone through a naturalization process, it cannot be a citizen.

Furthermore, there are certain "rights" viewed as extending to "all living things", generally enforced through, or seen as dictating, laws against animal cruelty and the like.

So, that gives us a framework for the "rights" aspect of a "thing": if it's alive, it has a certain level of "rights" that aren't necessarily universally enforced; if it's human it has other "rights"; and if it's a citizen, it has even more.

The carrier obviously qualifies for all of the above without question.  So, we focus on the foetus.

The foetus is, as stated, not a citizen since it has never been born and has never participated in naturalization.  So, the rights of US citizens do not apply.

Is the foetus alive?  In some sense, yes.  However, as stated, this does not guarantee a granting of rights: we kill living things all the time, and very rarely is an ethical question raised.  Usually such ethics are only called in when the living thing is capable of demonstrating a sensation of pain; it's questionable at what stage a foetus is able to do so, though observation seems to indicate that such sensation is possible only at around the start of the third trimester.

However, again, that isn't necessarily sufficient justification for the absolute prevention of killing; we still kill/harm living things that can feel pain, often for seemingly "trivial" reasons.

The third criterion is the crucial matter and the hardest to gauge: is the foetus human?  To answer that, we have to define what it means to be "human", something that is very difficult to do.

Inarguably, for most of its life, a foetus has little resemblance to a born human being beyond the biochemical - even once tissues start differentiating, the physical anatomy takes weeks to become recognizable.  A foetus that is delivered before the end of the second trimester is not likely to live: survival rates, even with the best medical care, are barely 50% at 24 weeks (two weeks before the end of the second trimester).  (This fact, combined with the development of the ability to feel pain, is generally why there is the current "third trimester" limit on elective abortions.)

Many people, mainly the religious, argue that a foetus is human from the moment it is conceived; the justification for this appears entirely religious, but I'll try to tackle it from an evidence-based perspective.

Merely having the genetic material of a human does not make something individually "human": cancer cells, for example, have all the genetic requirements for "human" but are killed all the time.  Many even have slightly different genomes (usually through mutation) and thus might actually qualify as a separate life form from the host, but this doesn't stop us from killing said cells.  Clearly, society as a whole uses some other criteria for judging the "humanity" of a being.  Formalizing those criteria would go a long way towards clearing up the abortion issue.

As a final consideration, it can be argued that, until the foetus is independently viable (again, around the start of the 3rd trimester), it cannot be truly considered an "independent life" and is instead either part of the carrier (as an organ or other growth) or a parasite.  Carriers regularly excise either from their bodies, sometimes under medical requirement and sometimes electively.

If the foetus is not independent, it has no "rights" as an entity.  If it is independent but isn't human, it has no rights normally associated with humanity.  Even if it is human, it has no rights associated with citizenship.

It's clear that, if society could formulate answers to the first two questions (those of independence and humanity), the issue of abortion could be solved ethically once and for all.  From the known evidence, I think the current 3rd-trimester limit is the most scientifically justified ethic, but it is society as a whole that needs to resolve the issue on the basis of all evidence.


In the beginning

... there was nothing, because there is no beginning.

As one of my favorite authors, Terry Pratchett, has written, stories don't start - they weave.  What seems like a beginning is just the continuation of something else.  Tom Stoppard once wrote, "We do on stage what most people do off, which is a kind of integrity if you look at every exit as an entrance somewhere else."

And there's the rub (to quote another bard): nothing is ever truly first.  There's always something that came before.  Even all the way back to that seminal event (that may or may not have occurred) called the Big Bang, and even prior to that - because for the singularity to "expand", it had to exist first, which means something was there before the Big Bang.

This is the trap of thinking in four dimensions - of thinking in terms of "time".  Time, you see, isn't a location, it's a direction.  It's flow - as in, time *is* flow and nothing else.  We have no method to describe it that isn't self-referential.  We have no tools for measuring it directly.  And yet we observe its assumed effects.

We could argue that time is part of the universe we know, and that therefore time itself came into existence as part of the big bang.  However, that solves nothing; it just complicates matters.  For something to "pre-exist time" implies the existence of some kind of "supertime" (the same way that stating something as "outside our universe" implies something larger): it doesn't solve the problem, it merely shifts it up a level.

Furthermore, because of how time is - flow - once it existed it had always existed.  Wrap your mind around that one for a few seconds: if time itself comes into existence, then any notion of "past" had to come into existence also, so "as soon as" time existed, there were past, present, and future already formed.  Once time was around, it had to have always been around, because "always" is a concept of time.

Anything that happens "outside" time - without a time dimension - would, by definition, happen simultaneously/constantly/forever (there really isn't a non-temporal word that works).  That means that what existed "before" the big bang would have to co-exist with whatever exists "after" the "universe" created by the big bang finishes/collapses/ends/whatever.  In essence, not only is the big bang still happening in the "superuniverse", but it's also already over.

And yet, we see the stories weave as we always have, one second before the next.  And just as no story ever really begins, no story ever really ends - it just becomes part of another story, another path to weave.


Needs and Wants

Why do we get into relationships?

It seems like it *should* be an easy question, but it's not. It's a vast whirlpool of conflicting ideas, ideals, and realities, one that can drag us down and crush us under the pressure.

I know part of the answer. Many times (most times?) we get into a relationship with someone because of something they have that we need, or something we have that they need (or sometimes both). I don't necessarily mean material goods - though that obviously happens. No, mostly, it's about psychological needs.

Like, one friend I have doesn't feel like he has any personal worth unless he's loved - and that's what he needs from his relationship, an affirmation of self-worth (if that even makes sense). Another needs a stabilizing force, and in return he provides an element of fun or pleasure. Some people need someone who is stronger, or someone to take care of, or someone to simply be there so they don't feel alone at night.

If I had to guess, I'd say 90% of all relationships fall into this category. That doesn't mean they're bad, mind you - sometimes, that kind of shared need is enough. Sometimes, it's not, and the relationship ends relatively quickly.

I have to think, though, that there's something else out there - some relationship that doesn't include these kinds of needs, where there's no psychological game to play or role to assume. Where you're with someone simply because you enjoy them, and they enjoy you. Maybe that's my own need - a need for there to not be needs - but I think that's getting a little solipsistic.

I think that's why I'm not in a relationship, and why I date so rarely. I usually state it as "I can't date anyone I have to fix", but maybe it's better stated that "I can't date anyone I have to fulfill". Oh, I can be a provider - of stability, of amusement, of strength, of weakness, even of love - but I don't want a relationship to be based on those being *necessary* for personal well-being. I want to date someone who is stable, and comfortable, and secure, and happy, with or without me.

I think I'd rather be wanted but not needed.  Others can make their own choices.