Dec 15, 2014

Did you answer the question I asked?

The last cartoon told the story of a meeting that was trying to deal with a broken schedule, but the real story is about broken communication. People hear what they expect to hear. If I ask you a question and you give me an answer that sounds reasonable, I am likely to believe that 1) you understood the question I asked and 2) you answered the question I asked. That is rarely the case, and it is never the case when people are scared. The complete story goes more like this:

I was hired at a company to help them develop a new product line that required a high speed, long distance data network. I had spent the previous five years doing network services research for what was then known as SBC, now known as AT&T, and was very much the right guy for the job. (Not to mention that working for the phone company made me so depressed that I had gone to a shrink for help. It was either drugs or leave SBC. I left after I found out that most of the people I worked with were on anti-depressant drugs. Working for a soul sucking evil company is bad for your health.)

After I had been working there for only three or four days I was asked to sit in on a meeting about a project that was running about three months late and that didn't look like it was going to finish any time soon. The project as planned was to take one year. They were now 15 months into the project and were starting to panic.

This was a typical "come to Jesus" meeting where the company president was asking everyone to pledge their lives to the completion of the project. They were told that the future of the company depended on the prompt completion of the project.  (In other words they were told that they either had to complete the project soon or find another job.)

After the attempt to scare his employees to death the president went around the room and asked each person to give him a completion date for their part of the project. He spent a lot of time doing this. He made them swear to the dates. Then he turned on one poor engineer who had been working for more than a year trying to solve a critical technical problem. The problem had to be solved to get the performance they were planning for. Failing to solve the problem would make the system run 50% slower. Worse than that, failing to solve the problem might mean that the process could never be made faster. That was not acceptable. They had to make it faster.

The mistake of making one highly competent person responsible for the most critical part of a project is more serious than it sounds. It puts tremendous pressure on that person. If they fail, the whole project fails. The stress can ruin a great engineer. Everyone needs to have the support of a team and feel that they can always ask for help and get help if they need it. Even the most hard core lone wolf (like me when I was younger) needs to know they can go to management and ask for help without getting put down or beat down. The worst thing that management can do is to say something like "YOU! need help? Gee, I guess you aren't as good as we thought you were." You might as well just carve their heart out and leave them dead on the side of the road. Of course, it does give the manager a perfect scapegoat when the project fails.

At the end of the meeting the president turned to me and explained, "I know you are not part of this project, but I want your opinion of the new schedule." "Oh shit," I thought. This was the time to either lie or lose any chance I had of making friends here. I decided to tell the truth because I wanted to work there a long time. Better to be truthful with management than to make friends. (I wound up leaving after only nine months, just ahead of the layoff that got everyone over 40.)

The situation in the meeting was particularly touchy because I was in my late 40s and most of the people in the room were in their late 20s and early 30s. They all had graduate degrees. They had been out of school for only a few years. Most of them had degrees from schools I couldn't even dream of going to. Almost all of them were from rich families; I was not. I was 15 years older than the president of the company. The age and social class structure of the group was out of whack. That all matters more than you might think.

The project was months overdue and the engineers had just committed to finishing it in four months. I was being asked to tell them if this would happen or not. I said, "No one asked any questions about how long it was going to take to test the system and integrate it into the production environment." The president looked at me oddly and so did everyone else. I went on, addressing the president: "You were asking when the system would be in production and generating revenue. But the engineers were talking about when development would be complete." You see folks, the question they heard was "When will you be done?" but the president was asking "When will it start making money?" The engineers all agreed with me. They were telling the president when development would be complete, not when the system would go into production.

I went on, "Also, there is no plan for what to do if the performance problem is not solved. You need to have a plan in place in case the problem is not solved. How do you know when to punt and go with the lower performance?" The engineer responsible for solving the problem looked damned angry at that point. Tight lips, white edges around the mouth and eyes. I said to her, "I am not saying you can not solve the problem. I am saying that it might not get solved in time for this system." That calmed her down a bit. The real problem was that management had never considered any of the risks involved in the project. They planned for the best possible case and never even considered what to do if something went wrong.

I went on to tell the group that I expected at least a month, most likely two months, of testing and performance tuning, and at least a month for integration into the production system. I went on to say that I thought it was likely to take three months after the end of development to get the system into production. All the engineers slowly nodded "yes"; the president looked like he was about to blow a vein.

It took three months to get the system into production after the end of development.

I talked to the president privately. He thought they had all been lying to him. I tried to explain that they had been perfectly truthful, but they were not hearing the question he was asking. They were hearing the question they thought he was asking. And, they answered that question with perfect honesty. The president could not accept that idea. He lost trust in his development staff.

How come I could spot the problems in the meeting? I spent four years working at the help desk at my university as an undergraduate, where I got a lot of experience figuring out what people were actually saying. After school various corporations put me through listening training, requirements gathering training, and interviewing training. I have been both a manager and an engineer and have seen the same problem in many meetings over the years. I have seen the problem from both sides and seen the terrible results. You can sum it up as "been trained, been there, done that, have the scars to prove it".

Dec 12, 2014

I've never been a cartoon before!

Seems like some people think my writing is actually interesting. The folks at decided to make a cartoon based on me, yours truly, the grumpy programmer. You can see the whole article or just look at the pictures right here.

Talk about how to inflate my already inflated ego!

Dec 10, 2014

Why is the Next Big Thing always Ancient History?

Recently I've been seeing a lot of hype about the Internet of Things (IoT). It is clearly the next big thing. A brand new idea that is going to change all our lives for the better. No doubt it will. I've been talking about the day when the refrigerator will track its contents and the age of everything in it for so long I can't even mention the idea to my wife without getting an eye roll and a glazed look. I think she got bored with the idea sometime in the '70s.... But, really wouldn't it be great if the refrigerator and the pantry would keep track of what was in them and generate shopping lists for me? How about being able to suggest a nice dinner menu using only stuff on hand? I'm still looking forward to that!

But, IoT is the next big thing; how could anyone have been talking about it for 30+ years? Well, it was kind of obvious that something like this would happen. After all, have you seen a picture of a robotic manufacturing plant? Do you think those robots aren't on a network? Networked "things" go back to the early days before Ethernet and Internet technology. Take a look at the history of the Internet Coffee Pot from 1991. No, that is not from before Ethernet, or Internet, but it gets the idea across. The idea of smart things has been floating around for at least 30 years, and really for a lot longer than that.

Are printers smart things? They have been networked since at least the early '80s.

Take the case of Google Goggles (Yes, I know they are called Google Glasses, but seriously... Try saying "Google Goggles" ten times fast. If you don't end up saying "goo goo" or "goo gah" you are a better man than I. Better yet, try that while doing your best imitation of a goose. Not suitable for work!) OK, clearly Google Goggles are another candidate for next big thing. Turns out that its ancestors go all the way back to Ivan Sutherland's head mounted display (Google for pictures) from 1968. The idea behind Google Goggles is nearly 50 years old.

The examples of old ideas that became the next big thing go on and on. Remember how big and important SVG (Scalable Vector Graphics) was back around the turn of the century? It was the coolest thing since 7-Eleven started selling Ripple in the refrigerator case. My boss at the time was a brilliant Electrical Engineer who was, for some reason, working on designing software. (EE design skills do not always transfer well to software; a function call is not a cable even if they both transfer data. Software skills do not transfer to electronics well either. Is a picofarad a real thing? Really?) Well, she was excited about SVG. She had read all about it and had been told by many people that it was the coming thing and we, for some reason I never heard, had to be on top of it. She came into my cube and told me to research SVG in depth. I asked, what do you want to know and why is it such a big deal? I told her that once the tool set settled down it was something that only artists were going to have to worry about. She not only didn't believe me, she got pretty angry.

After she settled down I made the mistake of saying that SVG is not new and it is no big deal. It was the kind of graphics I did as an undergrad and had coded up several times for different projects. SVG is nothing but 2D computer graphics with a 3x3 matrix and shading all wrapped up in a standard file format. No Big Whoop! I implemented all of that, including elliptical and spline curves on a 68000 in assembly language back in '85. It was old hat, undergraduate level stuff in the early '70s. Again, the next big thing was more than 30 years old. She was totally befuddled by my reaction. But, then one of the reasons I was hired was because of my experience in computer graphics. (Algorithms, not tools.)
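For anyone who never did that undergraduate exercise, the core of it is tiny. Here is a minimal Python sketch (my illustration, not anything from the SVG standard or my old 68000 code) of the 3x3 homogeneous matrix math that sits underneath SVG transforms:

```python
# 2D graphics with a 3x3 matrix: translate, rotate, and scale all
# become matrix multiplications once points carry a homogeneous 1.
import math

def mat_mul(a, b):
    # Multiply two 3x3 matrices (row-major lists of lists).
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, x, y):
    # Transform the point (x, y, 1) and drop the homogeneous coordinate.
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Rotate 90 degrees, then translate by (10, 0) -- the same composition
# SVG writes as transform="translate(10,0) rotate(90)".
m = mat_mul(translate(10, 0), rotate(math.pi / 2))
print(apply(m, 1, 0))  # (1,0) rotates to about (0,1), then shifts to about (10,1)
```

An SVG renderer does essentially this for every point of every path; the file format is the part that was new around 2000, not the math.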

How does some ancient idea or technology become the next big thing? Why does this happen again and again and again?

It happens to me because I am old enough to remember, curious enough to read, have a weird memory, and an over active imagination. Seriously, I am not normal. The thing is that this stuff is new to most people. When someone tells you something is new, and it is new to you, why shouldn't you believe them? The obvious answer is that people pushing the next big thing are trying to get rich and will lie like a rug to get there. Been there, almost done that. Another reason is that enthusiasm is infectious. It is just fun to jump on something that seems new and exciting. But, mostly the media have little to write about so they all jump on the same new idea and hype it.

No... that is not what I mean by why does this happen. I mean, why is it that something that was dreamed up, developed, and used 50 years ago can be the Next Big Thing?

The answer can be summed up, but not explained, by a quotation from Robert A. Heinlein in the book "The Door into Summer":

"When railroading time comes you can railroad—but not before."

Read the book, it is a good read. I have had several cats that actually looked for the door into summer, or dry weather, but I've never had to follow the cat to more than three doors. As sagacious as the quotation is, it is not an explanation of the effect. What the quotation is saying is that no matter when the idea was first thought up you do not get to build railroads, or anything else, until several problems are solved. Some of the problems are listed below.
  • The Technology has to exist. If you do not know how to make steel and steam engines you do not build steam engines that run on steel rails. If you can't build a wireless network you do not build wi-fi enabled coffee pots.
  • It has to be Cheap enough. You have to be able to make a profit off of your new product even when competing with existing products. Smart coffee pots have to compete with dumb coffee pots. Railroads had to compete with wagons and roads.
    My first Ethernet card cost more than $100 and I put it into a computer that cost over $1000. To build an Internet enabled coffee pot the cost of a wi-fi port + processor + RAM had to drop down to just a few dollars. Very few people will pay over a thousand dollars for a smart Mr. Coffee.
  • It has to be Usable. If there were not already millions of network enabled smart phones in the world a network enabled coffee pot would be pretty useless. The IoT people have it easy. The user interface device is already ubiquitous and well understood by potential customers. The networks needed to make them possible already exist. The railroads had to build their user interface, the train stations. Their whole network, the railroads. As well as the telegraph network needed to schedule them.
  • It has to be Legal. It seems like the reasonable thing to do when you have a great idea is to patent it. You can license the patent and make a lot of money. Or, not. Most patents are over valued by their owners. The problem is that once something is patented you either have to pay to use it or wait twenty years for the patent to expire. That twenty year wait is a major brake on the rate of progress.
    The other legal problem is the existence of actual laws that make it illegal to do what you want to do. Look at the problem Uber is having with local taxi regulations. Not to mention new laws banning them because of the alleged actions of some of their drivers.
  • It has to pass the Giggle test. Some ideas just seem silly to people. A product idea cannot succeed if people think it is just plain stupid or impossible. If there had been no Star Trek we still might not have cell phones. To be acceptable an idea has to be acceptable in science fiction. If it can't be accepted in sci-fi it will not be accepted by the public. 
  • There must be a Demand for the product. It doesn't matter if your product meets all the other requirements; people have to want what you are selling. As my MBA ex-brother-in-law once told me, "Toilet paper is the Perfect Product." Everyone wants it and better yet, they only use it once. People knew they wanted toilet paper as soon as they saw it, but we went thousands of years without it.
So, to build a new product you need technology that is cheap enough, usable, legal, in demand, and passes the giggle test.

Think about Google Goggles versus Ivan Sutherland's head mounted display. In the late '60s the HMD was at the hairy edge of what was technologically possible. The displays used video tubes. The joke was that it was a great idea as long as you don't mind having 20,000 volts across your temples. The displays in Google Goggles were not even science fiction in 1968. Comparing the computer in Google Goggles with the computer used by the head mounted display is, well, impossible. Moore's law says that our ability to put transistors on a chip has increased by more than a million fold since the head mounted display was built. Head position was determined using a huge mechanical gizmo called the Sword of Damocles. In Google Goggles head position is tracked using micro machine accelerometers and gyros. Technology that wasn't even possible in the mid-sixties. The fact is that while Ivan Sutherland's head mounted display demonstrated the wonderful potential of something like Google Goggles, at the time it was built it did not, and could not, meet any of the requirements to be widely used. It wasn't time to railroad yet. Now it is; we can build Google Goggles.

I actually saw Ivan's HMD. I went to the University of Utah for both undergraduate and graduate school. One day I was sent to get something out of a storage room. Way in the back of the room was this weird looking pile of equipment. It looked like a prop from a '50s era science fiction movie. I got what I was sent for and then asked around to find out what that pile of stuff was. Someone recognized it and told me about it. We went back to the storage room and he explained the whole thing. Later I was able to see videos of the thing being used. It made me really think. I think too much progress is blocked by a failure of imagination. Sutherland has a lot of imagination and the ability to make it real.

SVG was a very different situation. All the technology for SVG has existed since at least sometime in the '60s. The concepts and techniques behind SVG were well known, widely taught, and used all over the world. But, it was known, taught, and used by a tiny number of people. There was no demand for an SVG standard until computer graphic technology became cheap enough for a large number of people to be using it. There was no demand until there was a need to transmit art over a network. There was no demand until there was a need to dynamically scale art for many different types and sizes of displays. Once all those factors came into play it seemed like SVG popped up almost overnight. It had been lurking in the background and suddenly became visible to the whole world when the need for it emerged. SVG seemed like a new thing, but to old computer graphics hands it was nothing to get excited about.

The effect of the law on the next big thing is unappreciated by most techies. We tend to look only at the technology and not at the legal environment that lets us, or stops us, from building a service. I did a quick check on the web for the history of the Internet. I found lots of pages that talked about the history of the technology. One even mentions Sputnik and the cold war as driving forces behind the Internet. But, not one of them mentioned the nearly 100 years of legal theory, law making, and litigation, that created the legal environment that let people own modems, connect those modems to the telephone network, and then connect to the Internet without having to pay per minute to use the phone line.

If the law did not force AT&T to charge a fixed monthly fee for local telephone calls then using the Internet over the phone lines would have cost hundreds of dollars per day. In most countries all phone calls were billed by the minute and that policy kept the Internet from reaching far outside of the US until that changed. If the law did not force AT&T to let you own your own telephone equipment, let you connect that equipment to their network, and use that equipment to connect to the Internet it would have been impossible for most people to afford to use the Internet. No consumer friendly laws, no Internet. And, believe me, the phone companies, all of them, hate the Internet because they cannot control it.

It wasn't that long ago that the law was not on our side. That changed and we got the Internet. The big telecoms are trying, successfully so far, to change the law back to favoring them. If you like the Internet, you need to take action to protect the laws that govern the Internet. Call your representative and demand that net neutrality stay the law of the land.

The key thing to remember about the Internet is that the technology was developed and in use by the government and the military for nearly 20 years before the law was changed to let you use that technology. The Internet was old hat for some of us long before it became the next big thing. If you really want to get into something interesting, try finding out about the Top Secret patent system. Scary stuff. Technologists, and people who depend on technology, that is all of us, need to understand and get involved with the law.

Nov 6, 2014

It's the Users, Stupid!

The phrase "Mechanism not Policy" was the mantra of the X Window System developers from day one and still guides everything they do. I was involved in the X Window System (henceforth just "X") during the late '80s and early '90s. I represented a company on the X Consortium board as well as doing ports of the X server to several graphics systems. During that time I internalized the idea of "Mechanism not Policy". But, I rewrote it as "Tools not Rules".

(To the best of my knowledge I was the first person to port X to run on a frame buffer with 24 bit color and the first to port it to a display greater than 800x600 pixels in size. That was a long time ago in terms of Moore's law.)

The basic idea is that when you are creating a tool you need to keep in mind that you are creating a tool that people will use to solve problems that are important to them. The thing is, you do not, and can not, know what those problems are. People will find surprising uses you never thought of and use the tool to solve problems you could not imagine. That means that you must not, never ever, design the tool with built in limits that restrict what people can do with the tool. The corollary of this concept is that if users, especially a large number of users, ask for a small modification to make the tool more useful to them, then if it is at all possible you should make the change even if you can not see their reason for it. You should make the change even if you disagree with their reasons for asking for it. It is clearly the right way for them to solve their problem.

You are not the measure of all things; your point of view is not the one and only point of view that counts. This concept is summed up in the Platinum Rule, "Do unto others as they would have you do unto them". Compare that to the Golden Rule, in which your preferences are assumed to apply to everyone else in the entire Universe. A good example of the Platinum Rule is offering someone a drink. I like beer, wine, and a good whiskey. So, by the Golden Rule I should offer those things to people I meet. I have a friend who is an alcoholic; he has been sober for more than 30 years. He does not want me to offer him a drink, he does not want me to drink around him, but by the Golden Rule I should offer him a drink because I like to drink. By the Platinum Rule I should never offer him a drink, I should not drink around him, and I should intervene if I see him reaching for something that contains alcohol. Which rule do you think he wants me to follow?

Of course, the Platinum Rule can not be used to make you violate your personal moral code or to extract money from you... Gee, I want people to give me money... So, follow the Platinum Rule and give till it hurts! Gee, I want you to suck my... You must apply some reason to following any rule.

When it comes to adding features to an open source project you do have to be reasonable about adding new features, just like you do when you apply the Platinum Rule to dealing with people. People will ask for things that do not belong in the project or that will just require too much time to implement. My first open source project was a thing called the "Input Line Editor" or ILE. I had a couple of people who wanted me to help them convert it into an Ada compiler. These folks really did not see the difference between a simple editor and an Ada compiler. Oh well. I did refuse to make that modification. It would have taken me years and I did not want an Ada compiler.

On the other hand, I got several requests for modifications to ILE and I implemented most of them because they made the tool more useful to more people. The changes even made it more useful for me. I also got a couple of major bug fixes. Both of them for bugs I didn't even know I had. Nothing works as well as peer review.

If you do not know what Free/Libre/Open Source software is please go to the Open Source Initiative for a good definition of Open Source Software (click on the links, you know you want to). If you do not know what is right with it, please read "The Cathedral and the Bazaar" by Eric Raymond. (Seriously, if you have never heard of it I want to know the name of the alien planet you come from!) For the rest of this piece I will just refer to Free/Libre/Open Source software as OSS because that is pretty much what everyone else calls it. I could spend a lot of time ranting about what it really is or what it should be called, but those flame wars have been fought for decades and are still going on somewhere on the 'net. If you must flame someone about all that, please flame someone else.

The underlying philosophy of OSS goes hand in glove with the concept "Tools not Rules". OSS is about freedom. It is about the freedom to use and develop tools. Tools that do not impose arbitrary rules on the people who use them. I believe in freedom. I believe that powerful tools make us more free. I believe that having tools that can not be taken away makes us even more free. OSS provides tools that can not be taken away. Tools that we are free to modify for our own specific use and free to incorporate into new tools. Not everyone who works on OSS agrees with me on any of that.

There are people in the OSS world who believe they have the right to tell you what you can and can not do with the tools they are developing even when the changes needed to make them more powerful would cost them almost nothing.

I used to teach in a local community college. I was a teacher for about 10 years. I reached the rank of Adjunct Professor. If I had been full time I would probably still be teaching. Adjuncts get all the fun and work of being a real professor but none of the pay or benefits.

One class I taught was a basic computer literacy class. Believe it or not many people just out of high school and too many people in their 30s or older need a class to learn how to do email, word processing, spread sheets, and databases. Even though I had used all those tools in business for decades I wound up learning a few things that I didn't know... For instance, did you know that in Excel you can write spread sheets with circular references between the cells? Did you know that you can use that feature to write complex iterative programs entirely in the form of a spread sheet? And, if you have the information tied to a chart the chart will animate as the spread sheet cycles? There is a great demo of this feature that runs the famous "Game of Life" inside Excel. Each cell of the spread sheet is loaded with an expression that implements the actions of a cell in "Life". It is a very handy and very powerful feature. It is very useful to teachers and many other professions.
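To make the circular reference trick concrete, here is a toy Python sketch of the idea. It is not Excel's actual engine, just the same recalculate-until-stable loop that Excel exposes through its iterative calculation settings ("Maximum Iterations" and "Maximum Change"):

```python
# Toy iterative recalculation: instead of rejecting a circular
# reference, keep re-evaluating every formula until the values
# settle or the iteration cap is hit.
def recalc(cells, formulas, max_iter=100, max_change=1e-9):
    for _ in range(max_iter):
        biggest = 0.0
        for name, formula in formulas.items():
            new = formula(cells)
            biggest = max(biggest, abs(new - cells[name]))
            cells[name] = new
        if biggest <= max_change:
            break
    return cells

# A1 and B1 reference each other: A1 = B1/2 + 1 and B1 = A1/2 + 1.
# A plain spreadsheet flags this as a circular reference; with
# iteration enabled the values converge to A1 = B1 = 2.
cells = {"A1": 0.0, "B1": 0.0}
formulas = {"A1": lambda c: c["B1"] / 2 + 1,
            "B1": lambda c: c["A1"] / 2 + 1}
recalc(cells, formulas)
print(round(cells["A1"], 6), round(cells["B1"], 6))  # 2.0 2.0
```

The "Game of Life" demo is this same loop with one formula per grid cell; the chart animates because every pass through the loop changes the values the chart is tied to.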

Another very cool thing about spread sheets is that all those little cells form a very nice dependency graph that makes it really easy to execute spread sheets in parallel across multiple cores. It is much easier to find parallelism in spread sheets than in normal programs. So, in a good spread sheet program, you can get iterative programs and parallel execution with very little work. How cool is that?
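A rough sketch of why that works, in Python rather than a real spreadsheet engine: treat the cell references as a dependency graph and peel off, level by level, the cells whose inputs are already computed. Every cell in a level is independent of the others, so each level can be farmed out across cores:

```python
# Group spreadsheet cells into parallel-evaluation levels.
# deps maps each cell to the cells it reads from; with no circular
# references this is a DAG, so the grouping always terminates.
def schedule(deps):
    remaining = {cell: set(d) for cell, d in deps.items()}
    levels = []
    done = set()
    while remaining:
        # Everything whose inputs are all computed is ready now.
        ready = [c for c, d in remaining.items() if d <= done]
        if not ready:
            raise ValueError("circular reference")
        levels.append(sorted(ready))
        done.update(ready)
        for c in ready:
            del remaining[c]
    return levels

# D1 = B1 + C1, and B1, C1 both read A1. B1 and C1 land in the same
# level, so they could run on two cores at the same time.
deps = {"A1": [], "B1": ["A1"], "C1": ["A1"], "D1": ["B1", "C1"]}
print(schedule(deps))  # [['A1'], ['B1', 'C1'], ['D1']]
```

Finding this kind of structure in a normal imperative program requires deep analysis; in a spreadsheet the dependency graph is handed to you for free by the cell references.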

Excel lets you write general purpose programs as spread sheets. If you can write programs inside a tool, you know people will write programs inside that tool. The first time I saw that was when a customer of the University of Utah Computer Center wrote an entire statistics package as macros in our most popular text editor. Blew my poor little mind.

Why would anyone build a statistics package inside a text editor? Well, he was learning to use the editor, saw the part in the manual that talked about macros, and started playing with them because he thought they were cool. Pretty soon he saw he could do his statistical analysis inside the text editor. When he was done he could type in his numbers, invoke a macro, and get his results appended to the bottom of his file! Totally nuts but it worked great for him. That is the point I am trying to make. It worked great for him. People find their own ways to use technology that works for them. No one has the right to tell them that what they are doing is wrong. You might point out what you think is an easier way. You may point out what you think is a more correct way to do the job. But, is it easier for them, is it correct for them?

For some insane reason (I used to be a victim of this particular insanity) programmers seem to think that a solution that uses less computer time is more efficient than one that uses less human time. This is an insane attitude. Computer time is cheap and it is getting cheaper at an incredible rate. Human time is priceless, at least worth a lot more than a computer. Modern computers spend most of their time in an idle loop doing nothing at all, so why not use some of that time to inefficiently make things more efficient for human beings?

I used that feature of Excel to write quick and dirty programs to illustrate concepts when all my talking, white boarding, and hand waving did not work. One day I whipped up a quick spread sheet to create values from different statistical distributions and put up animated charts of the values being accumulated into nice curves that looked just like the ones you find in text books. I was trying to help my students understand that distributions talk about the properties of a large set of samples, but not much about individual values. For some reason on that night with that group of students I just could not get the idea across any other way. That quick and dirty spread sheet saved the day. After that I started using them all over the place.
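That demo is easy to reconstruct outside a spreadsheet. Here is a rough Python equivalent (my reconstruction, not the original sheet): each individual sample looks like noise, but the accumulating bins slowly trace out the familiar textbook curve.

```python
# Accumulate samples into histogram bins, the way the animated
# spreadsheet chart did, to show that a distribution describes the
# crowd of samples, not any single value.
import random

random.seed(1)  # reproducible classroom run

bins = [0] * 10            # histogram buckets covering [0, 10)
for _ in range(10_000):
    # Average of three uniform draws: individually unpredictable,
    # but the accumulated histogram bulges in the middle
    # (the central limit theorem at work).
    sample = sum(random.uniform(0, 10) for _ in range(3)) / 3
    bins[min(int(sample), 9)] += 1

for i, count in enumerate(bins):
    # Crude text "chart"; the spreadsheet version animated this
    # bar by bar as the samples accumulated.
    print(f"{i}-{i + 1}: {'#' * (count // 100)}")
```

Watching the middle bars outgrow the edges, sample by sample, is the whole lesson: any one draw can land anywhere, but the pile always takes the same shape.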

One afternoon between semesters I decided to create a series of animated spread sheet charts for use in an upcoming class. I sat down with LibreOffice Calc (the LibreOffice spread sheet program) to do the job. I have moved many spread sheets back and forth between Calc and Excel so I was very surprised when nothing I was trying to do worked. I assumed that I was just not setting some property correctly so I decided to go to the LibreOffice web site and look up how to do it in Calc. Turns out Calc only has a severely restricted version of the feature. A version that will let you compute VAT in Europe but restricted so that you can not write general iterative programs. I decided to report the problem as a bug. I found that I was not the first to report the bug. I was not the first to report the bug by more than 10 years. I was one of thousands who had reported the bug. The bug is so old that it existed in OpenOffice, the immediate ancestor of LibreOffice, and seems to have existed in StarOffice, the grandfather of LibreOffice.

One of the people who reported the bug was a mechanical engineer who mentioned that most of the engineering disciplines have large suites of spread sheets that do different kinds of analysis. These packages make heavy use of the ability to do iterative programming in a spread sheet. This kind fellow mentioned that the utility of the spread sheets was limited because they can only run on Excel on a PC and they really would love to run them on super computers under the Linux OS. LibreOffice already runs on Linux. Removing the restrictions on the feature would not only benefit a large group of random people, it would benefit the whole world of engineers.

Considering that LibreOffice already has an artificially restricted version of the feature, and considering that if LibreOffice had a full version of the feature it could work its way into every engineering office in the world, you would think that this bug would be at the very top of the list of bugs that need to be fixed. After all, all they have to do is remove the artificial restrictions that keep it from working. Nope, the developers say it is not a bug. Why not? Well, the developers say that you should not use a spreadsheet that way; if you want to do iteration, you should write a program in a programming language. I am not making this up. I wish I were, but I am not.

The developers are telling the world that they know better than anyone else, and they know you should not do that in a spreadsheet. It is impossible for me to comprehend the level of arrogance, or the lack of caring about other people, that lets them have that attitude. It is as if they are deliberately refusing to provide this critical feature to prevent LibreOffice Calc from ever being competitive with Excel. OK, that is a little paranoid, but seriously, what reason is there for the developers to prevent people from using Calc the same way they already use Excel?

The LibreOffice developers' attitude is unfathomable to me. The open source philosophy encourages developers to listen carefully to their users. Sometimes you do have to ignore them; they will ask for ridiculous things. But we are talking about an important and widely used feature of the most popular commercial spreadsheet program in the world. LibreOffice Calc can never be considered a complete replacement for Excel as long as the ability to write iterative programs is missing.

LibreOffice is not the only OSS project that suffers from the "My way or the Highway" attitude. It is just the one I have the most direct experience with. I do believe that "Tools not Rules" is a better guiding principle.

Oct 14, 2014

Things Change

This article is based on one I wrote several years ago. I was approached by a Polish magazine to write it, and since they offered me 300 Euros I was quite happy to do it. We had a contract. They published the article but did not pay me. I hope they all die slowly of some painful and incurable disease. Something like elephantiasis of the testicles would be appropriate. I wish that on every publisher who ever stiffed a writer.

I've updated the article to reflect 2014.

My first paid programming job was to port the game TREK73 from a Hewlett Packard 2000C minicomputer to a UNIVAC 1108 mainframe. That was in 1974, and I've been involved with games and graphics off and on in the 40 years since. For a while I taught game programming at the local community college. At the time I got that first job there were few programmers who wrote games and even fewer who managed to get paid to do it. I wound up doing a lot of other things in that job, but I did get to port several games and to develop a few of my own.

So what has changed in the last 40 years? Just about everything. 1974 is the year Intel released the 8080 microprocessor, the many-greats grandfather of the Core i7. Before 1974 very few people could have a computer for their own personal use. After 1974 personal computers started becoming commonplace in the industrialized world. In 1974 I did not work on a personal computer. In fact, I'm not sure I realized I would ever be able to own a computer until a year or two later, when I saw a minicomputer being advertised for “only” $50,000. I worked at the computer center of the University of Utah. We operated a time sharing system that provided computing power to the rest of the university.

Let's take a look at how some of the normal parts of a game porting project have changed since 1974.

Data Exchange

To port a game you have to get the source code and resources used in the game. In 2014 I would expect the data to come to me in one of several ways. I might be sent a URL and a password that would let me download the game over the Internet. My 100 Mbps connection (the slowest my ISP offers) makes that pretty easy. It's also possible that I would be sent a stack of DVDs or Blu-rays by snail mail or FedEx. In 2014 sending gigabytes of data is simple, and we don't really even have to think about how to do it.

There was no Internet in 1974. A fast data connection was 300 bits per second, not kilobits, not megabits, but bits sent via acoustic modem over an analog telephone line. There was no Ethernet then, LANs as we know them did not exist. There was no FedEx either. Any exchange of data required you to send physical media via snail mail. 

DVDs did not exist. Even CDs were many years in the future. The diode lasers needed to build CD and DVD drives were first developed in 1975, which means that CDs not only did not exist, they were not even possible in 1974. The floppy disk was invented in 1971, and the 8 inch floppy disk was the high tech wonder of the time. But they were very rare; I had never seen one by 1974 and didn't have one on the 1108.

In 1974 the standard media used for exchanging data were punched cards and reel to reel tape. DEC had those cute little random access tape drives, but no one else used them. The 1108 had rows of huge high speed tape drives. If you ever watch old '50s and '60s science fiction films you have seen these drives. They were the size of a refrigerator with two huge tape reels and a divider in the middle. The drooping tapes always made them look like sad faces to me.

The HP2000C that the game was coming from did not have tape drives. It did not have a card punch either. Minicomputers and mainframes didn't share many of the same peripherals. The only medium the two machines shared was punched paper tape. The computer center had a large number of TeleType ASR33 printing terminals. Some of those had paper tape readers. So, paper tape it was.

TREK73 was delivered to me on 8 channel punched paper tape, punched on about six small rolls. Each roll was wrapped with a rubber band to keep it from unwinding. The box they came in was a heavy duty cardboard box normally used to ship and store 2000 punch cards. Punched paper tape was not that commonly used in 1974; it was pretty much obsolete, but it was the only medium the 1108 and the HP2000C had in common. Since the bits on paper tape are large enough to see with the naked eye, you can guess that TREK73 was not a large program.

The UNIVAC operating system, EXEC8, had a special command for reading paper tapes from ASR33 terminals. To read the tapes I first had to go to the system programmers and get a special account that was allowed to run real time commands. Then I had to figure out how to use the cryptic command that let the 1108 accept input from the paper tape reader, and learn to start the tape reader at just the right time. I had to try several terminals before I found one that worked at all. It took a combination of special permission, skill, and luck to get a paper tape to read all the way through. I was not what you would call "lucky". The tape reader would just stop at random times. I don't remember being able to read a single tape all the way through in one try.

I had to read the tapes in pieces and assemble the pieces in a text editor. The tape reader did not always read the tapes correctly and never read them the same way twice. I had to read each tape several times and compare the files to find reader errors. I remember it took me a couple of days to get those tapes read and several more days before I thought I had a correct version of what was on the tapes. There were times when I had to count characters on the tape to find a missing piece of information and then read the bits by hand. They were read in at the astonishing speed of 10 characters/second over a 110 bits per second serial line. In 1974 the highest speed terminal I had ever seen ran at 1200 bits per second. 

Another big difference between then and now is the kind of displays we had. I did all of the editing and testing of TREK73 on printing terminals. CRT terminals were rare and very expensive in 1974. High resolution CRTs (anything over 640x480 is high resolution from the point of view of those times) were also rare and expensive. I did not have access to any WYSIWYG editors. In some editors the only way to change a line was to type in the line number followed by the new line of code. The process was slow, error prone, and horribly wasteful of paper.

The Computer

The computer center had a UNIVAC 1108 II (a.k.a. the 1108A). The 1108 was considered to be a super computer when it was installed at the University of Utah in the late '60s. I don't know exactly when it was installed, but I know it was between 1965 and 1968. The 1108 was one of the first computers to use integrated circuits and was considered to be a technological marvel of the time. Civic groups would book times to come and see our massive computer. By 1974 the 1108 was still considered to be a major computing installation. 

An 1108 was capable of being configured as a multiprocessor. In fact, ours had separate instruction and I/O processors that could be used in parallel. It filled a room as big as a large house and required special power and cooling systems. Portions of the machine had to be carefully positioned over the building's structural girders to keep them from falling through the floor. This super computer had roughly a megabyte of memory (2^18 words with 36 bits per word) and could, with luck and careful coding, execute a million instructions per second. It had several hundred megabytes of magnetic drum storage. It sold new for $2.5 million.

I picked up a much better computer at a garage sale a few years ago for $15. I only wanted the monitor to use when I bring up new computers, but the guy wouldn't sell me the monitor unless I took the computer too. It was an ancient 486 with 4 megabytes of RAM and a 200 megabyte hard drive. (Much to my surprise the computer booted to Windows 3.5. I found the fellow's tax returns and bank records for 5 years on the hard drive. He was very lucky that I bought it rather than someone interested in identity theft!)

Moore's Law (which was first written down in 1965) says the number of transistors you can put on a chip doubles every 18 to 24 months. The 1108 represents 1965 technology, which means that Moore's Law has been cranking away for 50 years since the 1108 was developed. We can now put 2^25, or roughly 33 million, times more transistors on a chip than was possible when the 1108 was first designed. A quick look at the number of transistors on new chips verifies this enormous change in computer technology. Understanding a 33 million fold increase in capabilities is hard to do. But I've seen it happen. Everyone my age has seen this change.
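
The arithmetic is easy to check. Taking one doubling every two years (the slower end of the 18 to 24 month range) over 50 years:

```python
# Back-of-the-envelope Moore's Law check: 50 years at one doubling
# every ~2 years is 25 doublings.
years = 50
doublings = years // 2
factor = 2 ** doublings

print(doublings)  # 25
print(factor)     # 33554432 -- "roughly 33 million"
```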

Porting the code was a lot like porting code nowadays, with a few exceptions. TREK73 was written in BASIC. To be specific, it was written in the version of BASIC that HP created for their minicomputers. The 1108 was running a version of BASIC called Real Time Basic (RTB) that was developed by SUNYA (State University of New York, Albany). Neither of these versions of BASIC was exactly the same language that was originally called BASIC by its inventors. In 1974 BASIC was only ten years old, but there were already several incompatible versions. There were no standards for the language, and Microsoft Basic didn't show up until 1975. Every version of BASIC was proprietary and used by only a small number of people on a small number of computers.

The hardest problem I had to solve in porting the code was getting a manual for HP2000 BASIC. I couldn't get one from HP. I couldn't get one off of the non-existent Internet. And I couldn't buy one at a book store. I managed to find a friend of a coworker who worked at a company that had an HP2000. He was willing to let me come over to his office and read his manual. It took a couple of weeks to find a copy of the manual, and I had to drive 20 minutes each way to read it. I can't imagine that happening any more. If you can't find the documentation you need on the Internet, you can buy it through your favorite online book store and have it shipped to you overnight.

The other problem I had is that RTB had a hard limit on the number of lines of code a program could have. Really, I'm not making this up. An RTB program could have no more than 1000 lines of code. TREK73 was more than 1000 lines long. RTB did provide a way to chain from one program to another while retaining the values of all variables, so I was able to work around the size limitation. But a lot of my porting time was spent factoring the program into chunks that were small enough to fit inside RTB's program size limits. This problem points to the small amount of memory on the 1108 and to the fact that RTB was a time sharing system that supported up to 50 simultaneous users on a computer with capabilities many times smaller than what you find in a modern cell phone.


Graphics

It is hard to describe the state of computer graphics in 1974. Computer graphics existed; SIGGRAPH had been established the previous year. The SIGGRAPH history page will give you a better feel for it than I can. Think about it this way: my current graphics card has over 1,000 cores and 2 gigabytes of RAM. My graphics card has 2,000 times as much RAM and is at least 1,000,000 times faster than the 1108. Modern computer graphics was not even science fiction in 1974.

The computer center had two graphics output devices. A Tektronix 4010 vector scope and a huge flat bed plotter. The 4010 used a storage tube display. You could draw lines, points, and text, but every write was cumulative. It was like drawing on paper with a pencil and no eraser. You could clear the entire screen, but not just one line. It was impossible to do animation with that device.

The flat bed plotter was a large table that used a suction system to hold paper flat while an ink pen, attached to a turret mounted on a sliding arm, moved back and forth actually drawing vector graphics in ink on the paper. The plotter was not connected directly to the mainframe. You wrote commands to tape and then waited for a human being to mount the tape on the plotter's controller, load the paper, position the pen, and press the start button. Then you waited for a human being to remove the paper, roll it up, and hand it to you. The backlog on the plotter ranged from minutes to days. It was not uncommon to wait two or three days for a plot to complete. That made debugging plotter programs a very costly and time consuming task. You could use the plotter to draw maps for use in table top games, but it was useless for interactive games.

The graphics libraries for the graphics scope and the flat bed plotter were incompatible. I made myself very popular by writing a package that emulated the plotter graphics library on the 4010. I wrote that package because I was writing a program to print out star charts used during one of the first Star Fleet Battles campaign games. I was a pretty hard core Trekker in those days (still am!) and I met the folks who developed that game through the local science fiction club. I was doing the star charts on my own time but on the university's computer (which I was allowed to do as long as I didn't use too much in the way of resources). To save plotter resources and to save my own time I had to find a faster way to test plotter programs. I figured that if it was useful to me it would be useful to everyone, and it was. It is always a good thing to turn your personal projects into something that can help the community.

I remember that the computer science department got a frame buffer some time during the early '70s. Yes, one single frame buffer. The few references I have found indicate that a 640x480 by 24 bit frame buffer sold for $80,000 in 1975. That's more than $350,000 in 2014 dollars. Graphics systems with the capabilities of current 3D video cards were not possible at that time. There was no technology that would allow you to build a machine with the combination of speed, computing power, and memory of a $19 bargain bin thrift store video card. The first affordable (less than $1,000) video display terminals didn't appear until 1977.

My version of TREK73 used only character graphics, on a printing terminal. Which is all I had access to at the time. As you played the game every move you made was printed on a long sheet of paper that piled up behind the terminal. People would roll the paper into scrolls and save their best games, sometimes tacking them up on the walls in their offices. It was the most popular program at the University of Utah Computer Center.

On the other hand, video games like Pong had already taken over the bars in the US. The difference in the game experience between what you could get on a dedicated video game and a general purpose computer was astonishing. Of course, the video games didn't have frame buffers, they either used vector graphics or they generated the video signal algorithmically.

In 1974 you could see what was going to be possible, but it wasn't until the Apple II came out in 1977 that the rest of us could start doing real computer graphics at home. The Apple II was affordable compared to what went before, but a machine with enough RAM to be useful and a floppy disk still cost the equivalent of 15% of my full time salary in '77. Not exactly cheap.

I had a ringside seat to the development of computer graphics in the early '70s by being a student at what was at the time the number one computer graphics school. Of course, I mostly studied compilers while I was there. I didn't realize that the graphics work being done at the computer science department was anything special! I only got interested in graphics because I saw it as an interesting form of output that compilers should understand, and later because of games.


Sound

There was no sound in TREK73. Which is a good thing, because the 1108 had no sound card. The only way to make it play music was to shake tapes at audio frequencies by doing very short reads and writes on the tape drives.

There was a program that would play polyphonic music on the 1108 by going into real time mode and shaking several tape drives, one for each voice, at the same time. This was a stunt we liked to trot out for tour groups and we had a good selection of Christmas tunes that we played during our yearly Christmas party. Caroling in the computer room with musical accompaniment provided by a 2.5 million dollar computer. Oh my...

Over the next 15 years game programmers learned to do some amazing things with sound using hardware timers and the lonely speakers built into computers. Those speakers were designed to go "beep" when you made a mistake, or when a computer was having trouble booting. We used them to play back pulse code modulated music and voice. I bought my first sound card, an original Creative Labs Sound Blaster card, shortly after they were released in 1989. I used that card in several different computers for nearly 10 years.


Networking

Before the 1970s a local network was a proprietary setup for connecting one manufacturer's terminals to the same manufacturer's mainframe computers. IBM had their way of connecting their terminals to their mainframes, UNIVAC had its way of doing it, and so on. There were companies that made lower cost clones of the different types of terminals, and there were also serial terminals like the TeleType ASR33 that would work with any computer with a serial port. But in 1974 LANs as we know them were research projects, not common tools. The very first experimental Ethernet was built in 1972, but it wasn't patented until late 1977 and didn't become widely available until the early '80s. I remember first using an Ethernet network when I went back to graduate school in '81. Ethernet was so expensive that very few home computer owners could afford it until the '90s.

In 1974 the computer center had a serial line network that ran to rooms filled with printing terminals in the engineering college, the business college, and the library. As I remember it there was a separate set of wires for each remote terminal. We had no multiplayer games and RTB was not capable of supporting multiple users connected to a single program.

The first networked games worked over proprietary networks. The first ones I saw worked between two computers connected via serial cable. That was in the late '70s. I, and a lot of other people, spent a lot of time during the '70s, '80s, and early '90s trying to come up with a cheap, fast network to use for games. The best idea for a cheap network that I saw during that time converted a serial connection into a kind of token passing network. It looked like it could be implemented using only a few diodes and a smart serial port driver. It looked like a good idea at the time, but I was never able to make it work. The only references to the idea I've been able to find on the net are some old emails from '91, which is about the time that Ethernet started to be affordable. If you could afford a computer in the '90s you could most likely afford an Ethernet card.

Programming Languages

Today we have a huge number of programming languages to choose from. And while we can pay for commercial implementations of our favorite language, there is almost always a high quality free version of the language just a download away. If I want a C++ compiler I can download one for free or buy one. If I want Pascal or Python or Perl I can just download them and use them. That was not the case in 1974. Most of those languages did not exist. C was developed between 1969 and 1973. I have a copy of the original K&R "C" book; it has a copyright date of 1978. I learned C in '81 or '82, and it was not widely available even then. I read the first paper on C++, published in "SIGPLAN Notices", in 1986. I learned Pascal by reading the original compiler, with Norwegian comments, some time around '75. I remember when Perl was announced on USENET, and I remember hearing about this great new language called Python when it came out in the early '90s.

The majority of programming on the 1108 was done in dialects of FORTRAN and COBOL. A lot of student work was done in Algol 60 and LISP. We also had a couple of simulation languages including the original object oriented language, Simula. Not that anyone used it.

The point is that most of the languages we use today either did not exist or were not available to programmers in the middle 1970s. Object oriented programming was only known and used by a small group of academic and simulation programmers. Not to mention that the closest thing to an IDE was the line oriented text editor in BASIC.

The truth is that because we had so few tools we didn't get a lot done. But, the sad fact is that programmers today write about the same amount of code in a day as we did then. The advantage we have now is that a single line of code can invoke a library function that we don't have to write. In the bad old days we often had to start by writing the tools to make the tools needed to write the libraries we needed for a project. To write my first program that used 3D texture mapping I had to write my own 3D library and my own texture mapping code. To do that I had to develop a fixed point math library. All that added months to the project. Now, I can get a better visual effect using a few lines of OpenGL or DirectX code. IDEs make life much easier by automating many tasks that programmers used to have to do by hand. And, let me give a hand to the people who write source level debuggers! I can't tell you how much of my life has been spent working on systems with no debugger or with machine language only debuggers. Source level debuggers can save a lot of effort. On the other hand, I can't remember the last time I had a bug that was so trivial that I could find it with a debugger of any kind. 

Post Script

From the point of view of this programmer, any time before about 1998 counts as the bad old days of computing. I get nostalgic for the mid to late '70s because that is when personal computing started to get fun. But it wasn't until the late '90s that computers became cheap enough and powerful enough to let me do any of what I was imagining back in the '70s. Now I hope I can manage to live long enough to do all the projects I wanted to do back then!

Oct 12, 2014

Nine Black People in Ullapool...

Recently my wife and I took our first trip to Scotland. It was a wonderful trip. I have a “thing” about castles. I have read many books on the subject. But, I had never seen a real castle before. Now I have.

People ask about what I liked best about my trip and I have to say that it was the Scots themselves. Yeah, we got the hairy eyeball from a couple of people walking down the street. And, we got stared and snickered at by a couple of older people dressed in rather tattered looking formal wear when we went into a place near Stirling Castle to have lunch. But, the vast majority of the people we met in Scotland were wonderful. Simply wonderful. They were as friendly and direct as Texans, but maybe more polite, if that is possible.

There was one lady we met who made my wife and me face just how different things are in Scotland than in the US. We were in the town of Ullapool on the west coast of Scotland, in the highlands. Ullapool is an old fishing village that now lives mostly on tourism. It is just a beautiful town in a beautiful place, full of nice people.

Our last morning there my wife and I had stopped at a coffee shop on the main road that has the ocean on one side and the town on the other. The shop had several tables outside so we took our cappuccinos and freshly baked scones out front so we could enjoy the weather and the wonderful view of the bay. After a while my wife wanted to do some more shopping and I did not. So, she went off shopping and I got another cappuccino.

It turns out that the second chair at my table was the only empty chair in front of the shop. A nice looking lady, maybe my age, maybe ten years older, asked if I minded if she sat there. Of course I invited her to sit. A conversation started, and we talked about traveling and about the US and Scotland. We talked about politics and raising kids. It was a lot of fun.

After a while my wife joined us and we all continued to talk. I love talking to strangers. You meet some of the most interesting people that way.

Ullapool is the kind of touristy place where tour buses stop and release their human cargo to wander around, look at the scenery, buy souvenirs, get a drink, and eat a meal before they hurry on to the next stop.

As we were sitting there talking, the nice lady (we never did get her name) bent over and said “Well, we've certainly had our dose of diversity for the day!”

I asked “How so?” I had seen nothing particularly “diverse” in Ullapool that day.

She replied cheerfully “I've counted 9 black people in town so far today.”

My wife and I looked at each other. It was a true WTF moment. I told her “We live in Texas, a majority minority state.”

She asked “What does that mean?” in what I can only describe as a querulous voice.

I defined the term this way: “In Texas, if you add up all the blacks, browns, yellows, reds, purples... what have you, they outnumber what you would call the 'whites' in the state.” I went on to say, “When I look down the entire length of Ullapool and see only pasty white faces it is surprising, almost disturbing, to me.”

In a very thoughtful voice she replied “I had never thought about that.” And then she changed the subject.

The exchange really caught me by surprise. I long ago realized that most people tend to think of the whole world as being just like their own backyard, but this example of it really brought that home. What I think of as normal would be very surprising to this lady. Oddly enough, she had told us that she had traveled extensively in the US. She had even taken a driving trip through Texas.

I tend to overgeneralize; I was about to say something comparing life and attitudes in Scotland with life and attitudes in Texas based on my experience with one nice lady on a sunny day in Scotland. What I can say based on my experience with her is that your life experiences shape you. She had lived most of her life in an area that is made up mostly of a single ethnic group that she strongly identifies with. She is a Scot; that is her nationality and her ethnicity. Like most Americans, I have lived most of my life not belonging to a specific ethnic group, though most people would call me “white”. But white is not an ethnic group, any more than it is a race; it just means I have to wear a long sleeved shirt to the beach.

Aug 17, 2014

Oh Unicode, Why Are You So Frustrating?

I do, you know, find Unicode to be truly frustrating. For a long time I've wanted to write a long loud flame about everything I dislike about Unicode. But, I can't. You see, every time I run across something that frustrates me about Unicode I make the mistake of researching it. If you want to really hate something you have to completely lack the intellectual ability, and honesty, to research that hateful thing and try to understand it. If you do the research, you just might find out that there are good reasons for what you hate and that you could not have done better. Jeez, that is frustrating.

I wish the folks who defined Unicode had realized that they were not going to get away with a 16 bit code. But when they started, 23 years ago, 16 bits certainly seemed like enough. Now there is just too much old code, including a bunch of compilers, that has the built in belief that a character is 16 bits. Unicode now requires a 21 bit code space. I wish the folks in charge of Unicode would just admit that it is going to be a 32 bit code some day, so that the rest of us would be forced to design for a 32 bit character set. Could I have done better? Could I have avoided their mistakes? Hell no. I was perfectly happy with seven bit US-ASCII. Back in the late '80s I was vaguely aware that Chinese used a lot of characters, enough that I was suspicious that they would not all fit in 16 bits, but I really didn't care. I'm pretty sure I would have focused on the languages I was most familiar with and ignored everything else.

Unicode has forced me to understand that I was (am?) a language bigot. Nobody really wants to come face to face with their character (sorry about that) flaws. I am a 12th generation American and I speak American English. In school I studied German, French, and Spanish. The only one of any use to me is Spanish. I once met a guy from Quebec who refused to speak English, but he sure understood it. I have never met anyone from Germany or France who did not speak English. But, living in Texas and the US southwest, knowing a lot more Spanish than I do would be useful. I have met and worked with people from dozens of countries, but they all speak American English. Only folks from the UK seem to make a big deal about UK English, and they don't seem to care if they can be understood. Wow, that sure sounded like it was written by a language bigot, didn't it? Or maybe it just sounds like it was written by a typical monolingual American.

Unicode has taught me more about how the majority of humans communicate than anything else. It has forced me to face all those other scripts. It has forced me to think about characters in a completely new way. Until I tried to grok Unicode I had no idea that most scripts do not have upper and lower case letters. I did not know that some scripts have three cases: lower case, upper case, and title case. I always believed that every lower case letter has one, and only one, upper case equivalent. I did not know that there is not always a one to one relationship between upper and lower case characters. The old toupper() and tolower() functions I know and love are now pretty much meaningless in an international, multilingual environment. So if you want to ensure that “are”, “Are”, and “ARE” all test equal, you can't just convert them to upper or lower case and do the comparison. You have to do case folding according to a Unicode algorithm and then do the test.
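
For what it's worth, here is a quick sketch in Python, whose str.casefold() implements the Unicode case folding algorithm. The German sharp s is the classic case where upper/lower conversion alone fails:

```python
# Unicode case folding vs. plain case conversion.
words = ["are", "Are", "ARE"]

# Case folding makes all three compare equal.
assert len({w.casefold() for w in words}) == 1

# German sharp s folds to "ss", so folding matches...
print("straße".casefold() == "STRASSE".casefold())  # True

# ...while tolower()-style conversion does not.
print("straße".lower() == "STRASSE".lower())        # False
```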

I hate modifier characters. (They are also known as combining characters.) The idea that there are two ways to represent the same character is enough to drive one to insanity. You can code “ö” as either a single character or as two characters, an “o” followed by a Unicode combining diaeresis (U+0308). Modifier characters of different sorts are used all through Unicode.
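The two representations are easy to see from Python, where a decomposed string really does contain one more code point. A minimal sketch:

```python
import unicodedata

composed = "\u00F6"     # "ö" as one code point: LATIN SMALL LETTER O WITH DIAERESIS
decomposed = "o\u0308"  # "o" followed by COMBINING DIAERESIS

print(composed, decomposed)            # both display as ö
print(composed == decomposed)          # False: different code point sequences
print(len(composed), len(decomposed))  # 1 2

# Normalizing to NFC (the composed canonical form) makes them identical.
print(unicodedata.normalize("NFC", decomposed) == composed)  # True
```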
Why does Unicode have modifier characters? Why not just have unique encodings for each of the different combinations of characters with special decorations? It would make life so much easier! And, it is not like they do not have plenty of code points to spread around. Well, Unicode has a policy of being compatible with existing character encodings. That doesn't mean you get the same code points as in the old encoding. (Or, maybe it does! It does at least sometimes.) But, it does mean that you get the same glyphs in the same order so that it is easy to convert existing data to Unicode.

Making existing character sets fit into Unicode with no, or few, changes just makes too much sense. If you want people to use your encoding then make their existing data compliant by default or with a minimal amount of work. That is just too sensible to say it is wrong. Guess what? The old character encodings had modifier characters. Why? I do not know, but I can guess that it is because they were trying to fit them into a one byte character set alongside US-ASCII. I mean, US-ASCII already has most of the letters used in European languages so it was easier to add a few modifier characters than to add all the characters you can make with them. I mean, 8 bits is not much space. I can hate modifier characters all I want but I can't fault them for leaving them in. And, once they are in, why not use the concept of modifier characters to solve other problems? Never waste a good concept.

How do modifiers make me a grumpy programmer? What do I have to do to compare “Gödel” and “Gödel” and make sure it comes out equal? I mean, one of those strings could be 5 characters long while the other is 6 characters long. (Well, no, they are both 5 characters long, it is just too bad that the representation of one of the characters may be one code point longer in one string than in the other one.) So, first you have to convert the strings to one of the canonical forms defined by Unicode, do case folding, and then compare them. Since the strings might change length you may have to allocate fresh memory to store them. So what used to be a very simple operation now requires running two complex algorithms, possibly allocating dynamic memory, before you can finally do the comparison. Oh my...
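A sketch of that dance in Python, using the standard library's unicodedata for normalization. (Full Unicode caseless matching is slightly more involved than this, but NFC plus casefold covers the example.)

```python
import unicodedata

def unicode_equal(a: str, b: str) -> bool:
    """Compare strings, ignoring case and composed/decomposed differences.
    A sketch only: real collation (sorting, locale rules) needs more."""
    def fold(s: str) -> str:
        return unicodedata.normalize("NFC", s.casefold())
    return fold(a) == fold(b)

name1 = "G\u00F6del"   # Gödel, 5 code points (precomposed ö)
name2 = "Go\u0308del"  # Gödel, 6 code points (o + combining diaeresis)

print(len(name1), len(name2))       # 5 6
print(name1 == name2)               # False: raw code points differ
print(unicode_equal(name1, name2))  # True
```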

All that complexity just to do a string comparison. OK, so I had to face the fact that I am a cycle hoarder and a bit hoarder. I grew up back when a megabyte of RAM was called a Moby and a million cycles per second was a damn fast computer. I'm sitting here on a machine with 8 gigabytes of RAM, 8 multibillion cycle per second cores (not to mention the 2 gigs of video RAM and 1000s of GPU cores) and I am worried about the cost of doing a test for equality on a Unicode string. When you grow up with scarcity it can be hard to adapt to abundance. I saw that in the parents who grew up during the Great Depression (and a very depressing time it was). My mother made sure we did not tear wrapping paper when we unwrapped a present. She would save it and reuse it. I worry about using a few extra bytes of RAM and the cost of a call to malloc(). I am trying to get over that. Really, I am. Can't fault Unicode for my attitudes.

I wish that Unicode was finished. It isn't finished. It is a moving target. The first release was in October 1991; the most recent release (as I write this) was in June 2014. There have been a total of 25 releases in 23 years. The most recent release added a couple of thousand characters to support more than 20 new scripts. Among other things, after all this time they finally got around to adding the ruble currency sign. (If I were Russian I might feel a little insulted by that.) You would think they could finish this in 23 years, right? Wrong. It takes that long just to get everyone who will benefit from Unicode to hear about it, decide it is worth working on, and finally get around to working on it. It will take a lot longer. Get used to the fact that it may never be done. I hope that the creativity of human beings will force Unicode to add new characters forever.

Each release seems to “slightly” change the format of some files, so everyone who processes the data needs to do some rework. I have never been able to find a formal syntax specification for any of the Unicode data files. If you find one please let me know. It would not be that hard to create an EBNF for the files.
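There is at least a documented shape, if not a grammar: per the Unicode Character Database documentation (UAX #44), each UnicodeData.txt record is one line of 15 semicolon-separated fields. A parsing sketch, using what I believe is the actual entry for U+0041:

```python
# One record from UnicodeData.txt: 15 semicolon-separated fields.
line = "0041;LATIN CAPITAL LETTER A;Lu;0;L;;;;;N;;;;0061;"

fields = line.split(";")
assert len(fields) == 15

codepoint = int(fields[0], 16)  # field 1: code point, hex
name = fields[1]                # field 2: character name
category = fields[2]            # field 3: general category, "Lu" = uppercase letter
lowercase = fields[13]          # field 14: simple lowercase mapping, here "0061" ("a")

print(hex(codepoint), name, category, chr(int(lowercase, 16)))
```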

I do wish they would add Klingon and maybe Elvish. That is very unlikely to happen. If they let one constructed script in they would have a hard time keeping others out. I can see people creating new languages with new scripts just to get them added to Unicode. People are nasty that way. Unicode does have a huge range of code points set aside for user defined characters. But that doesn't seem to be much use for document exchange. There needs to be a way to register different uses of that character space.

I hate that I do not fully understand Unicode. There are characters in some scripts that seem to only be there to control how characters are merged while being printed. Ligatures I understand, but the intricacies of character linking in the Arabic scripts are something I probably will never understand. But, if you want to sort a file containing names in English, Chinese, and Arabic you better understand how to treat those characters.

During the last 40 years I have learned a lot of cute tricks for dealing with characters in parsers. Pretty much none of them work with Unicode. Think about parsing a number. That is a pretty simple job if you can tell numeric characters from non-numeric characters. In US-ASCII there are 10 such characters, grouped together as a sequence of 10 consecutive code points. OK, fine. The Unicode database nicely flags characters as numeric or not. It sorts them into several numeric classes and gives their numeric values. Should be easy to parse a number. But, there are over a hundred versions of the decimal digits in Unicode, including superscripts and subscripts. If I am parsing a decimal number should I allow all of these characters in a number? Can a number include old fashioned US-ASCII digits, Devanagari digits, and Bengali digits all in the same number? Should I allow the use of language specific number characters that stand for things like the number 40? Should I recognize and deal with the Tibetan half zero? How about the special character representations of the so called “vulgar fractions”? Or, should I pick a script based on the current locale, or perhaps require the complete number to be in a single script? What do I do in a locale that does not have a zero but has the nine other digits?
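Python's standard library shows how murky this already is: unicodedata knows the numeric classes and values, and int() quietly accepts decimal digits from any script, even mixed within one number. A sketch:

```python
import unicodedata

# Devanagari digits are decimal digits with known values.
print(unicodedata.digit("\u0967"))  # १ -> 1
print(int("\u0967\u0968\u0969"))    # १२३ -> 123

# int() even accepts digits from different scripts in one number:
# ASCII "1", Devanagari "२" (2), ASCII "3".
print(int("1" + "\u0968" + "3"))    # 123

# SUPERSCRIPT TWO is a digit but not a *decimal* digit, so int()
# rejects it even though unicodedata knows its value.
two = "\u00B2"  # ²
print(two.isdigit(), two.isdecimal())  # True False
print(unicodedata.digit(two))          # 2
```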

(I must admit to being truly amazed by the Tibetan half zero character. Half zero? OK, it may or may not, depending on who you read, mean a value of -1/2. But, there seem to be no examples of it being used. And they left out the ruble sign until 2014?)

How about hexadecimal numbers? Well, according to the Unicode FAQ you can only use the Latin digits 0..9 and Latin letters a..f, A..F and their full width equivalents in hexadecimal numbers. I can use Devanagari for decimal numbers but I have to use Latin characters for hexadecimal numbers. That does not make a great deal of sense. This is an example of what would be called the founder effect in genetics: the genes of the founding population have a disproportionate effect on the genetics of the later population. English has been the dominant language in computer technology since its beginning and seems to be forcing the use of the English alphabet and language everywhere. What a mess.

You run into similar problems with defining a programming language identifier name. Do you go with the locale? Do you go with the script of the first character? Or do you let people mix and match to their hearts' content? I can see lots of fun obfuscating code by using a mixture of scripts in identifier names. If you go with the locale you could wind up with code that compiles in Delhi, but not in Austin, Beijing, or London. I think I have to write a whole blog on this problem.
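Python 3 actually had to make these choices (PEP 3131): identifiers may be written in any script, and the compiler applies NFKC normalization, so two visually different spellings can collapse into one name. A sketch:

```python
# Identifiers in non-Latin scripts are legal Python 3.
码 = 42        # a Chinese-character variable name
print(码)      # 42

# Compile-time NFKC normalization: the "ﬁ" ligature (U+FB01)
# normalizes to the two letters "fi", so both spellings end up
# naming the same variable.
ns = {}
exec("\ufb01 = 1", ns)   # assigns to identifier ﬁ
print("fi" in ns)        # True: stored under the normalized name
```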

I've used the word “locale” several times now without commenting on what it means in regard to Unicode. The idea of a locale is to provide a database of characters and formats to use based on the local language and customs. Things like the currency symbol: locale gives you “$” in the USA, “¥” in Japan, and “£” in the UK. Great, unless you have an application that might want to list prices in dollars, yen, and pounds on a computer in the EU. Locale is for local data, not for the data stored in the database. You use the locale to decide how to display messages and dates and so on, on the local computer. But it does not, and cannot, control how data is stored in a database.

This is a great collection of grumps, complaints, and whines. It was mostly about the difference between my ancient habits and patterns of thought and the world as it really is. Writing this has helped me come to grips with my problems with Unicode and helped me understand it better. The march of technology has exposed me, and I hope many others, to many languages and the cultures that created them. One of the worst traits of humans is the tendency to believe that the rest of the world is just like their backyard. Without even realizing how rare a backyard is!