ITE 221 — PC Hardware and O/S Architecture Paragraphs

By Charles Wetzel

These paragraphs (fourteen in total) are assignments I did for my ITE 221 course, a course in PC hardware and operating systems architecture. I got an 'A' in the course, and had some fun writing these paragraphs. Basically, for the paragraphs, we were assigned to find a Web site or Web page about a topic mentioned in the chapter and research it. I made sure to research some interesting things for each and every paragraph. I'm not sure if this page will be interesting to anyone but me, but I'd like to preserve these essays since I worked hard on them.

Chapter 1

http://www.bls.gov/oco/ocos110.htm
This is the Bureau of Labor Statistics section on computer programmers. The reason I feel it is relevant to Chapter 1 is that Chapter 1 expounds on applications programmers, systems programmers, CIOs (Chief Information Officers), and many, many other job titles. I figured it would be good to research applications programmers since I will likely start programming as an applications programmer. According to the Bureau of Labor Statistics page that I found with a Google search, applications programmers tend to be more entry-level than systems programmers. Many systems programmers start as applications programmers and move into systems programming, which pays better and requires more technological knowledge. Among all types of computer programmers, including applications and systems programmers, 8 in 10 have at least an associate's degree, roughly half hold a bachelor's degree, and 2 in 10 have a master's degree or higher; overall, about 68% of programmers have at least a bachelor's degree. Almost all new hires are required to have a bachelor's degree, which is why I'm striving for one. Entry-level salaries supposedly start at between $49,000 and $50,000, but in today's economy, I can't help but be a bit skeptical about that. In terms of job outlook, it's not looking good. The Bureau of Labor Statistics, part of our eternally-optimistic government, projects a 4% decline in jobs over the period from 2006 to 2016. Jobs will decrease from 435,000 to 417,000, they say (I would bet any amount of money that jobs in the US will decrease even more than that, though). However, they say positions will still be fairly abundant for people with bachelor's degrees and for experienced programmers, because many older, experienced programmers are retiring or changing professions. Still, it is a highly outsourceable type of work, and both the Bureau of Labor Statistics and popular opinion seem to say that companies are more inclined to outsource the work if they can. Because of this, jobs in software engineering and computer consulting, which are more localized and less outsourceable, are thought to be more secure than regular, plain-vanilla programming jobs.

Chapter 2

http://www.strassmann.com/pubs/datamation0297/
When I read Chapter 2 in our textbook, I found Grosch's Law to be interesting, so I did a search for a Web site on that topic and researched it. Grosch's Law was formulated in either 1952 or 1965 (the book says 1952, but another source I read says 1965). Grosch's Law states that a computer's power grows approximately as the square of its cost. In other words, if one computer is only half as expensive as another computer, it will have only one quarter the processing power. All this information was contained in the textbook Systems Architecture, essentially, but reading the above Web site provided some interesting insight into predictions that were made while people still believed the Law, and how those perceptions changed. At the time Grosch formulated his Law, people put stock in it and bought large, powerful computers instead of smaller computers with less power, on the philosophy that the more money spent, the higher the return on the investment. This led one computer scientist to proclaim that in the future, the world would contain about five major mainframes that would serve the entire earth's population via timesharing. The logic was that if several countries purchased a multi-billion- or multi-trillion-dollar computer, its power would be incomparably higher than any mid-range computer's. However, people started to find fault with Grosch's Law in about 1986, when an article was published criticizing corporate spending on computers. Various articles published between 1986 and 1994 showed that although the United States spent much more on computers than other developed countries, the returns were negligible (one company put the returns at about 1%). Today, at least for personal computers, Grosch's Law seems to have inverted: the more GHz a processor has, the more its price skyrockets. A person can buy an old 1 GHz processor almost for free, but a 3 GHz processor still costs quite a bit of money. However, going back to Systems Architecture for a moment, the authors claim Grosch's Law still holds for certain classes of computers (I believe mainframes were mentioned). In any case, Grosch's Law is interesting because it was a prediction that everyone believed for about 20 years, and it now has almost no credibility because it does not describe the current computer market.
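
To make the scaling concrete, here is a minimal sketch in Python of the relationship the Law describes; the proportionality constant is made up purely for illustration, since only the ratios matter.

    # Grosch's Law: computing power grows roughly as the square of cost.
    # The constant k is arbitrary; only the ratios matter here.
    def grosch_power(cost, k=1.0):
        return k * cost ** 2

    expensive = grosch_power(10_000)   # hypothetical $10,000 machine
    cheap = grosch_power(5_000)        # a machine costing half as much
    print(cheap / expensive)           # 0.25 -- half the cost buys a quarter of the power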

Chapter 3

http://www.unicode.org/history/summary.html
The Web page above is about the history of Unicode. Unicode is of special interest to me because I live in South Korea, and without 16-bit encoding schemes, storing Korean on a computer would be much more difficult (requiring either linear hangeul, which is hard to read, or some other method more cumbersome than Unicode). My source is the official Unicode Web site. Basically, Unicode grew out of several earlier efforts to create a 16-bit encoding scheme that supported international scripts. Some of these were 16-bit ISO standards, the 16-bit system used on the Xerox Star computer that came out in 1980, and methods used on Far Eastern computers (since all the major Far Eastern languages use Chinese characters to some degree, and Chinese characters number in the thousands, an 8-bit encoding system is practically impossible). The development of Unicode spanned the 1980s and continued into the early 1990s (most of the major work seems to have been done during 1990, leading up to the end of that year). Some of the last components to be added to Unicode were the Arabic glyphs and the Far Eastern scripts. It is important to note that, to keep the number of characters down (since there are theoretically 50,000 Chinese characters), some characters were merged or left out. This has been known to cause problems and controversy.
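
As a small illustration of my own (not from the Unicode site), here is why an 8-bit encoding cannot hold Korean directly, while a 16-bit scheme handles a syllable in two bytes:

    # The Korean syllable "한" is code point U+D55C, far above 255, so it
    # cannot be represented in a single-byte encoding.
    syllable = "한"
    print(hex(ord(syllable)))            # 0xd55c
    print(syllable.encode("utf-16-be"))  # two bytes in a 16-bit encoding
    try:
        syllable.encode("latin-1")       # a typical 8-bit encoding
    except UnicodeEncodeError as error:
        print("8-bit encoding fails:", error)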

Chapter 4

http://www.intel.com/museum/archives/4004.htm

Summary of the Intel Corporate Article on the Intel 4004

By Charles Wetzel

The early history of computing has always fascinated me, so I decided to do my 150-word summary on the Intel 4004, the world's first microprocessor. Chapter 4 mentions this CPU and how it contained only 2,300 transistors. Intel created the 4004 in November 1971 in response to an order placed by the Japan-based Nippon Calculating Machine Corporation, which needed chips for its printing calculators. Prior to that, most computing devices like calculators needed dozens of integrated circuits that each performed specialized tasks. The Intel 4004 was unique in its time in that it combined the control unit, the ALU, and so on onto one chip. The calculator actually used a set of four chips designed by Intel, and that general arrangement continues on modern PCs, where separate chips handle specialized functions like input from and output to primary storage. One of the chips Intel made along with the 4004 was a ROM chip that let people store programs that utilized the 4004. The 4004 was very much a general-purpose chip and supposedly matched the computing power of the 18,000-tube ENIAC. Eventually, the Nippon Calculating Machine Corporation sold 100,000 Busicom calculators of the model that used the Intel 4004. Although Intel initially had to give Nippon Calculating Machine Corporation the rights to the processor design, the two companies later agreed that Intel could retain the rights to the CPU technology as long as Nippon Calculating Machine Corporation received a discount on the processors. Nippon agreed to this because it was in dire financial straits, unlike Intel, which was planning ahead for future chip development. Later on, the Intel 8080 (8-bit word size) succeeded the 4004 (which had only a 4-bit word size), and eventually 16-bit processors like the 8088 in the first IBM PC succeeded the 8080. The first 32-bit Intel processor was the 80386, and then for about 20 years Intel didn't increase the number of bits, until it created 64-bit processors for home computers in response to AMD having done so slightly earlier. However, further increases in CPU word length are unlikely in the near future, because very few applications use more than 64 bits for most things, and increasing to 128 bits could mean a threefold increase in price per chip (this part is from the text).

Chapter 5

http://www.psych.usyd.edu.au/pdp-11/core.html

Chapter 5 mentions core memory in passing as an obsolete technology, along with types of RAM and secondary storage. I love researching old computer systems from long before I was born, so I latched onto core memory and decided to research it. The above site explains it this way (I hope I get this right). Core memory was one of the first types of random access memory. Whether it counts as primary or secondary storage is actually somewhat arguable, it appears, because unlike modern RAM it generally retained its state even when the computer was turned off. It was basically a cross between what we think of as magnetic storage and RAM. There was a mesh of wires (the site above mentions four wires per core, but I've read different numbers in other sources). Strung on the mesh of wires like beads were the cores, which were made of a ferrite material, and each core held a magnetic state. X wires and Y wires ran through the cores. If only the X wire or only the Y wire running through a core carried current, the core would not change. This prevented other cores on the same X or Y line from being rewritten when a specific core was being rewritten. However, if both the X and Y wires carried current, the combined field was enough to flip the core's magnetization. There was also a sense wire and an inhibit wire. The technical explanation of how these wires work did not make complete sense to me, but what I gathered is that in the old days a read cycle destroyed the contents of the memory location (set it to zero), and the value had to be written back afterward with a write operation. Anyways, the reason this type of storage was less volatile than today's RAM is that once a ferrite core was magnetized in one direction, it kept that state as long as it wasn't deliberately changed -- even after the power was removed. However, during system powerdown the voltage on the wires would change rapidly, so special circuitry had to be designed to counter this. As for the history of core memory, it was first conceived in 1951 at MIT. Access times were initially extremely slow (6 microseconds) but later got much, much faster; still, compared to modern RAM, it was extremely slow. This concludes my paragraph on core memory, a vital part of early computer systems such as the IBM System/360.
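
To help myself keep the selection rules straight, here is a toy model of my own (not from the site) in Python: a core changes only when both its X and Y lines are energized, and a read is destructive and has to be followed by a write-back.

    # Toy model of coincident-current core memory (my own sketch).
    class CorePlane:
        def __init__(self, rows, cols):
            # 0/1 stands in for the two directions of magnetization.
            self.cores = [[0] * cols for _ in range(rows)]

        def write(self, x, y, value):
            # Only the core where the energized X and Y wires cross sees
            # a full-select current, so only that core changes state.
            self.cores[x][y] = value

        def read(self, x, y):
            value = self.cores[x][y]
            self.cores[x][y] = 0       # the read itself resets the core...
            self.write(x, y, value)    # ...so a write-back restores the bit
            return value

    plane = CorePlane(4, 4)
    plane.write(2, 3, 1)
    print(plane.read(2, 3))  # prints 1, and the bit survives the read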

Chapter 6

https://www-01.ibm.com/chips/techlib/techlib.nsf/techdocs/D21E662845B95D4F872570AB0055404D/$file/2053_IBM_CellIntro.pdf

This is IBM's Web site, specifically a technical overview of the IBM Cell processor (the same one used in the PlayStation 3). It relates to Chapter 6 in that the document discusses at length things like RAM controllers, multiprocessing, and other topics connected to the system bus, a major theme of the chapter. Though the document was written at a higher level than I can deal with comfortably, I nevertheless learned some interesting pieces of information about the Cell processor. Its RAM controllers are Rambus XDR controllers, and the memory bandwidth is a whopping 25.6 gigabytes per second; the textbook quoted bandwidth numbers for desktop PC RAM that were much, much lower than this figure. Another interesting fact about the Cell is that it uses extensive multiprocessing. According to this document, there are eight SPE units (Synergistic Processor Elements). Each SPE runs at 3.2 GHz, and they work in a multiprocessing configuration in which eight bytes of data can be passed from one processor to another per clock cycle. This is a huge amount of inter-SPE bandwidth considering how many clock cycles are executed per second: since each SPE runs at 3.2 gigahertz and eight bytes can be transferred per clock cycle, 25.6 gigabytes can be transferred between one processor and another per second! Furthermore, another topic covered in Chapter 6 (L1 cache) comes up in the IBM document. Each SPE has 256 kilobytes of on-chip memory (the document just refers to it as "memory," but since it sits on the processor, it plays a role similar to L1 cache). This is a response to the phenomenon the document describes in which processor speeds have increased exponentially while RAM speeds haven't increased nearly as much, leading to far more wait states. The 256 kilobytes per SPE are designed to help prevent too many wait states. According to the document, predictive processing has already started to near its limits, and this on-chip memory is essential in today's world. Finally, the Cell processor apparently achieves compatibility with both Power and PowerPC architectures, making porting of existing operating systems (for example, Yellow Dog Linux) much easier. One of the SPEs in the PlayStation 3 is specifically dedicated to the operating system being run.
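
The bandwidth figure follows directly from the clock speed and the per-cycle transfer width; here is that arithmetic spelled out in Python:

    # 3.2 GHz x 8 bytes per clock cycle = 25.6 GB/s of inter-SPE bandwidth.
    clock_hz = 3.2e9          # cycles per second for one SPE
    bytes_per_cycle = 8       # bytes moved per clock cycle
    bandwidth_bytes = clock_hz * bytes_per_cycle
    print(bandwidth_bytes / 1e9, "GB/s")   # 25.6 GB/s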

Chapter 7

http://patater.com/gbaguy/day4n.htm

For this week's assignment, I chose to research palettes, a topic of Chapter 7. The above page is part of a 30-part site on Nintendo Entertainment System programming and describes the palette system of the original Nintendo Entertainment System (released in the US in 1985, so it is very primitive). Since I hope to someday become a game programmer, and since it is my goal to write a simple, non-bank-switching Nintendo game in 6502 assembly by the end of this year, this particular page is relevant both to our course and to my end-of-the-year goal. Anyways, this document explains that the NES is capable of working with 32 colors at a time. These colors are stored as individual bytes -- one byte per color -- so the color palette on the Nintendo Entertainment System (from now on, the "NES") is exactly 32 bytes. In the case of this programming tutorial, the learner is told that the palette is loaded into memory location $3F00. Whether this memory address can be changed or not, I'm not sure. As for limitations, only four colors from the palette of 32 can be used on any one sprite (and sprites are 8x8 or 8x16 pixels). Keep in mind that one of those four is a background color, which is generally set to black. Only four colors can be used per 16x16-pixel piece of background as well. Therefore, basically the only way a person could accurately render something like a low-resolution photograph on the NES would be to do it in dithered four-level grayscale, because an image with diverse colors would be unrenderable due to the four-color limitation. To make a palette file, one can use a palette editor or a hex editor. Since the palette editor is written in Visual Basic and needs runtimes that I don't have on my computer (and was unable to install), I suppose I'll need to use a hex editor to edit the palette values for my end-of-the-year game. The palette file, at least for the most commonly used Nintendo assembler, should be in .pal format. The palette's address is loaded using lda instructions to get part of the memory address into the accumulator (register A) and sta instructions to store it into memory (I believe into the Picture Processing Unit, or PPU). Since the palette's address is a 16-bit address and the 6502 is only an 8-bit processor, loading the address alone takes four instructions -- two lda's and two sta's -- just to get the memory address of the palette into the PPU. I've actually tried all this and have already written a simple NES program with the help of this tutorial (mostly just cut-and-paste code), but so far I'm only able to use the first four colors in the palette, and only on sprites. I hope in the future to learn how to change the colors in a sprite, and how to do backgrounds. That'll be part of my end-of-the-year goal.
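
Since I'll probably end up poking palette bytes by hand anyway, here is a small sketch of my own (not from the tutorial) that writes a 32-byte .pal file with Python instead of a hex editor. The color index values, the background/sprite ordering, and the file name are my assumptions for illustration, not a palette taken from the tutorial.

    # Build a 32-byte palette file: 16 background entries then 16 sprite
    # entries, one NES color index per byte (placeholder values).
    background_palette = [0x0F, 0x00, 0x10, 0x30] * 4   # 16 background bytes
    sprite_palette     = [0x0F, 0x16, 0x27, 0x18] * 4   # 16 sprite bytes
    data = bytes(background_palette + sprite_palette)
    assert len(data) == 32                               # exactly 32 bytes
    with open("game.pal", "wb") as f:
        f.write(data)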

Chapter 8

http://www.google.com/patents?id=npMlAAAAEBAJ&printsec=abstract&zoom=4

This site is applicable to Chapter 8 because Chapter 8 deals with data and network communication technology, of which modems are an important part. The above site contains the patent for an early acoustic coupler modem. The abstract of the patent states that a telephone receiver is coupled tightly to the modem. The purpose of the modem is, obviously, to allow transmission of data over the telephone line. Interestingly enough, there were already methods of computer networking before the invention of the modem, but those proprietary methods required costly lines, whereas the telephone line was much more widely available to the average Joe. However (and this I learned from Wikipedia), modem development was initially hampered by Bell Telephone restricting which devices could be connected electrically to its phone lines, making modems in the modern sense against Bell's rules. People got around this, largely in the 60s, 70s, and into the 80s, with acoustic coupler modems, which used a Bell-compatible telephone handset and a modem that was only MECHANICALLY (acoustically), not electrically, connected to the phone line through that handset. The rules were eventually changed, but acoustic coupling served as the workaround for decades. Further research I did outside this source shows that the signaling standard used by acoustic coupler modems is still compatible enough that one hacker (who posted a YouTube video on the subject) succeeded in coupling a 1964 300-baud acoustic coupler modem to his laptop and loading a Wikipedia entry using Lynx.

Chapter 9

http://www.livinginternet.com/i/ii_tcpip.htm

This site is about TCP/IP, which relates to Chapter 9; specifically, it covers the history of TCP/IP. TCP/IP was invented in the 1970s as a successor to NCP, an earlier protocol, primarily by Robert Kahn and Vinton Cerf. The TCP/IP standard (at least up through IPv4, which is in common use at this very instant) gives each computer a unique IP address composed of four numbers, each ranging from 0 to 255. The trouble is, that range isn't very large, so there aren't enough IP addresses to go around, and few people back in the 70s and 80s knew this would become a problem. An IPv6 standard has since been developed, but adoption is slow. According to the site above, it used to be much easier to make everyone adopt a standard. To ensure that people stopped using the obsolete NCP standard and used TCP/IP instead, the developers temporarily shut down all NCP sites for a day or two at a time as a warning. This ensured that most people switched over to TCP/IP, but even in 1983, when TCP/IP became the main standard, some network computers were offline for about three months while being retrofitted for TCP/IP. One of the first TCP/IP tests took place in 1975 and involved a complex exchange of signals between two universities -- Stanford and the University of London. Other nodes were involved in the exchange, such as a computer in Norway, and satellites were employed. Leading up to 1983, warnings were issued that NCP, the old standard, would soon be phased out, and the rest is history. Now virtually everyone uses TCP/IP.
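
To put the shortage in perspective, here is a quick back-of-the-envelope calculation of my own (not from the site) comparing the two address spaces:

    # Four numbers of 0-255 give IPv4 only 2^32 addresses; IPv6 uses 128 bits.
    ipv4_addresses = 256 ** 4                # = 2 ** 32
    ipv6_addresses = 2 ** 128
    print(ipv4_addresses)                    # 4294967296, about 4.3 billion
    print(ipv6_addresses // ipv4_addresses)  # how many times larger IPv6's space is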

Chapter 10

http://patater.com/gbaguy/nesasm.htm

At first glance, the above site (which I feel relates to application programming) may seem juvenile and may seem to lack credibility, but I downloaded several of the author's source code files and assembled them with a 6502 assembler, and he indeed knows what he's talking about, at least to the extent of being able to program applications/games for the original Nintendo Entertainment System. The reason I picked the site, and the way it relates to application programming, is that it covers many Chapter 10 topics -- machine code, assembly language, source code, executable code, compilers, etc. Basically, what I learned about Nintendo Entertainment System application programming is that, unlike later computer platforms, the NES has no well-known C/C++ compilers, and everything must be done in 6502 assembly because of the limited resources: compiling C++ programs would require too many resources and introduce too much inefficiency in the code. Since the 6502 in the NES runs at less than 2 MHz, programs must be written in assembly. The assembly is converted to machine code by a combination assembler/linker (for the code I built, I just used one .EXE program on my computer). The source code is saved as a .ASM file and includes statements like "lda #$FF" (load the accumulator with 255 in decimal) and "sta $0000" (store the value in the accumulator into memory address $0000). However, when I used this Web site to assemble and link a program for the NES (which ends up in .NES ROM format), I found that some peculiar things happen in the binary file. For example, say I assemble my source code and produce a 24-kilobyte NES ROM as output (the equivalent of a .EXE file on a PC -- a binary program). If I open the file in a hex editor, the contents are non-contiguous. In other words, there is a header, then perhaps several kilobytes of nothing but null characters (0's), then the program code, then more null characters, and then some of the tile data. I found this non-contiguous layout kind of confusing, and I am still trying to figure out whether I can produce contiguous applications (I'd like to make a 4K contiguous block of code to enter in a competition in which the maximum app size is only 4,096 bytes). As for other Chapter 10 topics, the site addresses executable code in that it discusses how .ASM source files are assembled and linked into .NES binaries that can hopefully run on an actual machine, and it touches on compilers in that the 6502 assembler plays the same role here that a compiler does for a high-level language: translating source code into executable op codes. Some of the instructions translated into op codes include the previously mentioned sta and lda, as well as things like BNE (branch if not equal, taken when the zero flag is clear after a CMP operation). In conclusion, the above site is an interesting introduction to the Chapter 10 topic of application programming, in reference to the Nintendo Entertainment System.
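
To see that layout for yourself, here is a small sketch of my own (not from the tutorial) that peeks at a ROM the way I did with the hex editor, assuming the common iNES format (a 16-byte header starting with "NES" followed by $1A); the file name is just an example.

    # Inspect a .NES ROM: header, then PRG (program) banks, then CHR (tile) data.
    with open("mygame.nes", "rb") as f:
        rom = f.read()

    header = rom[:16]
    print(header[:4])                             # b'NES\x1a' for an iNES-format ROM
    print(header[4], "x 16 KB of PRG (program) data")
    print(header[5], "x 8 KB of CHR (tile) data")

    # Count the padding (null bytes) that makes the file look non-contiguous.
    print(rom.count(0), "zero bytes out of", len(rom))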

Chapter 11

http://www.drdos.net/documentation/usergeng/13ugch13.htm

I picked a site on DR-DOS for my mini-report on operating systems (for Chapter 11: Operating Systems). This site is of special interest because it contradicts what the textbook says and what many people believe -- that DOS is an unconditionally single-tasking operating system. Although that is basically true of MS-DOS, some DOS variants (such as DR-DOS from version 7 onward) do support multitasking. The site I picked explains how to set up multitasking under DR-DOS (currently maintained by Caldera). The program that acts as the task manager is called TASKMGR.EXE. It needs to be loaded once in AUTOEXEC.BAT to ensure that it runs at startup, and the hotkey for bringing up TASKMGR.EXE at any time is Ctrl + Esc. Full multitasking requires an 80386 (probably because of the 80386's support for protected memory and virtual memory). With DR-DOS's task manager, one can actually allocate the percentage of CPU devoted to each program manually! This is very useful and a feature I believe should be included in Windows as well, because sometimes the OS scheduler doesn't allocate resources as well as I feel I could (for example, not giving enough CPU cycles to a music player, so the music skips). TASKMGR.EXE can also run on a 286, but only as a task switcher, not as a manager for full multitasking. In conclusion, the statement that DOS is a single-tasking operating system isn't entirely correct, because some versions of DOS actually do include multitasking.

Chapter 12

http://kerneltrap.org/node/6776

This site is about the ext4 file system, used mainly in Linux. I believe it is relevant to file management systems, the topic of Chapter 12, because ext4 is comparable to the file systems mentioned in the chapter, including FAT and NTFS. My research using this site (and Wikipedia) shows that there are several choices of Linux file system. One is XFS, but the most popular choice for Linux is currently the ext family (ext2, ext3, and ext4). Correct me if I'm wrong on this. Now, this specific site talks about some of the advantages, disadvantages, and the general development process of ext4. Ext4 was in development around 2006, when this site was made, and the programmers of ext4 claimed that it would be ready and stable within 12 to 18 months. New features include things like higher-resolution time stamps. I'm not sure exactly what that means or why it's important, but I'm assuming it means the file system can record the time a file was modified or accessed with more precision. According to the article, ext4 will probably be released with an even-numbered version of the Linux kernel (for example, 2.6), because those are the stable, non-experimental builds. Many people, especially in the Linux kernel development community, are (or were) concerned about the stability of ext4. Complexity was also a concern, as was backwards and forwards compatibility. One of the goals of the ext4 project was to allow devices formatted with the old ext3 file system to be used without being automatically converted. Apparently the project was being coordinated heavily with Linus Torvalds himself, the creator of the original Linux kernel. In conclusion, this article taught me about the features, concerns, and development process of the ext4 file system for Linux, as of roughly 2006, when the site was made.
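
As a small illustration of my own (not from the article) of what higher-resolution time stamps look like in practice, Python can report a file's modification time both in seconds and in nanoseconds; how many of those digits are actually meaningful depends on the file system (ext3 stored whole seconds, while ext4 stores nanoseconds). The file name here is just an example.

    import os

    # Modification time of an arbitrary file, at two resolutions.
    info = os.stat("example.txt")
    print(info.st_mtime)      # seconds since the epoch, as a float
    print(info.st_mtime_ns)   # the same instant in integer nanoseconds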

Chapter 13

http://folding.stanford.edu/English/Guide

For my Chapter 13 Internet and Distributed Application Services mini-report, I decided to cover the above site. It is about a distributed application service called Folding@home. Folding@home is a project maintained by Stanford that uses computers in a grid configuration (many computers of varying specifications spread out geographically) to perform protein-folding calculations. Proteins folding the wrong way can result in many infamous and awful diseases like Alzheimer's and Mad Cow Disease. The program itself is free to install and uses the resources of millions of idle computers to function as one extremely powerful virtual supercomputer. The platforms on which it can run are diverse, including not only Windows but also Linux, Macintosh, and even the PlayStation 3! Thanks to Folding@home, Stanford has already made several breakthroughs regarding protein folding, and thanks to this distributed networking application, new medicines or cures may be discovered. In terms of technologies covered in the chapter, I believe the program uses pipes (allowing each computer to access virtual files on other computers for communication purposes), the TCP/IP protocol, and other standard features of distributed application services.

Chapter 14

http://news.softpedia.com/news/Monitoring-a-Linux-System-With-X11-Console-Web-Based-Tools-51678.shtml

I chose to research system monitors for the Linux operating system, since a monitoring application is a very important tool for a system administrator (the topic of Chapter 14). Monitors allow the system administrator to track system performance, identify bottlenecks, find out which processes are consuming the most processor time, and so on. The above source covers some of the major monitors for Linux. Some well-known GUI-based (X11-based) Linux monitors are KSysGuard, gnome-system-monitor, and GKrellM. KSysGuard is specifically designed for the KDE graphical environment, though it may be runnable under Gnome using certain KDE extensions. Gnome-system-monitor is, obviously, a system monitor for Gnome. GKrellM is a very flashy, graphical monitor that came into existence in 1999 and has been very popular with the "1337" crowd -- in other words, people who want to look like advanced Linux users by using flashy applications conspicuously. I'm not sure if it's any good or not (I'm not going to install Linux to find out). There are also numerous console-based system monitors. Console-based system monitors, while perhaps not as attractive graphically, have the advantage of using fewer system resources and interfering less with the processes being observed. Chapter 14 discussed how system monitors affect the very performance they are observing, and how some monitors take measurements only every so often so as to interfere minimally. Using a console-based monitor is another strategy along the same lines -- use minimal system resources to run the monitor so as not to disturb the operations being observed. In conclusion, the above site was an interesting introduction to Linux monitoring programs, an essential tool for any Linux system administrator.
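
As a rough sketch of my own (not from the article) of that periodic-sampling idea, a minimal console monitor might just poll a cheap kernel statistic at a fixed interval. /proc/loadavg is a standard Linux file; the interval and sample count here are arbitrary.

    import time

    # Poll the 1-minute load average a few times, sleeping between samples
    # so the monitor itself stays out of the way.
    def sample_load(interval_seconds=5, samples=3):
        for _ in range(samples):
            with open("/proc/loadavg") as f:
                one_minute = f.read().split()[0]
            print("load average (1 min):", one_minute)
            time.sleep(interval_seconds)

    sample_load()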

Word count for this document: 4,877 words