Examples include word processors such as Microsoft Word, spreadsheets (e.g. Lotus), databases (e.g. Access), graphics packages (e.g. CorelDRAW), and so on. Next you will look at the basic units a computer uses to perform all of its functions. The next segment, Binary, bits and bytes, will give you some understanding of what is happening inside your computer. To appreciate the importance of the various breakthroughs in the history of the computer industry you will need a basic knowledge of how a computer works, and in this segment you will look at how a computer represents information.
At its very lowest level a computer operates by turning on or off millions of tiny switches, called transistors. In computers these transistors can only be in one of two states; that is, on or off. Such devices are thus referred to as two-state devices. Another example of a two-state device might be a conventional light switch. It is either on or off, with no intermediate state. In mathematics the term binary is used to refer to a number system which has only two digits, that is 1 and 0.
The number system we use in everyday life has ten digits, 0 to 9, and is called denary. The binary system is the smallest number system that can be used to represent information.
Any number from our normal, denary system can be represented in binary; 0 in denary is 0 in binary. Similarly 1 in denary is 1 in binary.
When you get to 2 in denary you have a problem. There are no more symbols in binary; you are restricted to only 1 and 0. So how do you represent two? This question is similar to asking how you represent ten in denary.
Once you get to nine you have run out of digits, so you simply create a new column and start afresh, using 1 and 0. This is also what you do in binary, so 2 in denary becomes 10 in binary.
When you move on to 3 in denary you proceed as before; 3 becomes 11 in binary. The table below shows how denary numbers convert to binary.

Denary  Binary
0       0
1       1
2       10
3       11
4       100
5       101
6       110
7       111

It is useful to think of binary in terms of columns. The first column represents units, so a 0 here means no units, i.e. zero. The next column represents the number of 2s, so a 1 in this column means 2. The next column represents 4s and so on, with each column being twice as big as the previous one.
This is also what we do in denary, each column being a factor of 10 bigger than the previous one. If you want to convert binary numbers to denary, this is a useful method: make a set of columns for units, 2s, 4s, 8s and so on, and add up the value of every column that contains a 1. You will not be asked to convert numbers, so don't worry too much about the details.
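The column method is easy to try for yourself. As an illustration (added here, not part of the original course text), the following Python sketch converts a denary number to binary and back using the doubling columns described above:

```python
# Convert a denary (base-10) number to binary by repeatedly dividing by 2
# and recording the remainders.
def denary_to_binary(n):
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))    # remainder is the next binary digit
        n //= 2
    return "".join(reversed(digits))

# Convert back by adding up the value of each column that contains a 1.
def binary_to_denary(bits):
    total = 0
    column_value = 1                 # the units column
    for digit in reversed(bits):
        if digit == "1":
            total += column_value
        column_value *= 2            # each column is twice the previous one
    return total

for d in range(8):
    print(d, denary_to_binary(d))    # 0 -> 0, 1 -> 1, 2 -> 10, 3 -> 11, ...

print(binary_to_denary("1011"))      # 8 + 2 + 1 = 11
```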
As I mentioned at the start of this segment, a computer functions by manipulating 1s and 0s. As you have seen, you can represent any denary number in binary. It is also possible to represent any letter of the alphabet, or other character, using binary by simply assigning a code to it in the computer.
When I type the letter A, the binary number 01000001 will be stored in my computer. I can later retrieve it and the letter A will be displayed on screen.
This will only happen if the computer has received instructions to treat 01000001 as an IA-5 character. The same pattern could equally be used to represent the denary number 65. The computer knows what to do with the data because it has instructions from a program, and these instructions are themselves binary representations. It is worth examining the difference between data and instructions. The data is the current information the computer program is working with. This might be some numbers I am adding up, or some text I am typing.
It will vary from instance to instance. The instructions are what the computer does with the data. This must always be consistent; for example, clicking on the Save button will always save the data.
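The letter-A example is easy to reproduce. Here is a small Python sketch (an illustration added to this text) showing that the same bit pattern can be read either as a character code or as a number:

```python
# The IA-5/ASCII code for the letter 'A' is 65, which is 01000001 in binary.
code = ord("A")
print(code)                   # 65
print(format(code, "08b"))    # 01000001

# The same eight bits can be interpreted in more than one way:
bits = 0b01000001
print(bits)                   # ...as the denary number 65
print(chr(bits))              # ...as the character 'A'
```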
So numbers and text can be represented using the binary system. What else can? Images can be represented using a technique known as bit-mapping.
This divides an image up into thousands of cells and allocates a value to each cell. If the image is in black and white, each cell will have a value of 1 indicating it is black or 0 indicating it is white. Colour can be represented by allocating more information to each cell to indicate the proportion of red, green and blue (RGB) values. A wide spectrum of colours can be created by varying the relative values of red, green and blue.
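As an added illustration (not part of the original text), here is a tiny Python sketch of a bit-mapped image: a black-and-white grid of 1s and 0s, plus a single colour cell stored as red, green and blue values:

```python
# A 5x5 black-and-white bitmap: 1 = black cell, 0 = white cell.
# This particular grid draws a rough letter 'T'.
image = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

for row in image:
    # Print '#' for black and '.' for white so the shape is visible.
    print("".join("#" if cell else "." for cell in row))

# A colour cell carries more information: the proportions of red, green
# and blue, each commonly held in one byte (0 to 255).
purple = (128, 0, 128)   # (red, green, blue)
print(purple)
```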
You will encounter bit-mapping in more detail in a later section. What else can be represented in binary? The answer is just about anything. Sound, like images, can be divided up into small segments, each given an appropriate binary value, from which the sound can later be faithfully reproduced.
This is what your music CD player does. Sound, light and other natural signals are usually analogue. The difference between digital and analogue is an important one as it underlies the advantage of using computers for many tasks. There is more about what is meant by analogue and digital here: Analogue and digital.
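To make the idea of sampling concrete, here is a short Python sketch (an added illustration with made-up numbers) that measures a smooth, analogue-style signal at regular intervals and stores each measurement as a binary value:

```python
import math

# Sample one cycle of a smooth wave at 16 evenly spaced points, then
# quantise each sample to a whole number from 0 to 255 so that it fits
# in a single byte.
samples = []
for i in range(16):
    level = (math.sin(2 * math.pi * i / 16) + 1) / 2   # between 0.0 and 1.0
    samples.append(round(level * 255))                  # stored as 0-255

for value in samples:
    print(format(value, "08b"), value)
```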
So computers work by manipulating 1s and 0s. These are binary digits, or bits for short. Single bits are too small to be much use, so they are grouped together into units of 8 bits. Each 8-bit unit is called a byte. A byte is the basic unit which is passed around the computer, often in groups. Because of this the number 8 and its multiples have become important in computing.
You will particularly encounter the numbers 8, 16, 32 and 64 in various contexts in computing literature, and this is usually due to the 8-bit byte being the basic building unit. The key point to appreciate is that although basing your entire system on only two digits may seem limiting, these two digits can be used to represent almost anything.
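As a small added illustration, the Python sketch below shows why a single byte can take 256 different values and how the familiar sizes of 16, 32 and 64 bits are simply groups of bytes:

```python
# Eight bits give 2**8 = 256 possible patterns, so one byte can hold
# any value from 0 to 255.
print(2 ** 8)    # 256

# Larger units are just groups of bytes.
for num_bytes in (1, 2, 4, 8):
    print(num_bytes, "byte(s) =", num_bytes * 8, "bits")
```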
Bits, bytes, kilobytes and megabytes are merely ways of measuring the size of things computers deal with. A kilobyte is 2 to the power of 10 bytes. This is actually 1,024 bytes, but is close enough to a thousand to be given the prefix kilo, meaning a thousand.
Similarly, a megabyte is 2 to the power of 20 bytes (that is, 1,024 kilobytes), which comes out as 1,048,576 bytes. For the sake of convenience, this is called a megabyte, meaning a million bytes. A gigabyte is 1,024 megabytes. Here is an animation in Flash which demonstrates the difference between analogue and digital signals and the conversion from analogue to digital: Flash animation.
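These sizes are easy to check. The short Python sketch below (added here as an illustration) prints the exact number of bytes in a kilobyte, megabyte and gigabyte:

```python
# Binary size prefixes are powers of two.
kilobyte = 2 ** 10          # 1,024 bytes
megabyte = 2 ** 20          # 1,048,576 bytes (1,024 kilobytes)
gigabyte = 2 ** 30          # 1,073,741,824 bytes (1,024 megabytes)

print(f"{kilobyte:,}")      # 1,024
print(f"{megabyte:,}")      # 1,048,576
print(f"{gigabyte:,}")      # 1,073,741,824
```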
In the next segment you will look at the structure of your computer, and what its various components actually do: Computer architecture. In the last segment you saw how a computer could use binary digits (bits) to represent almost any information. This segment will show how a computer uses this binary representation to perform its various tasks. By combining a series of bytes, any data or instruction can be represented.
Consider a simple example which takes a number and displays it on the screen. The program would need a short sequence of instructions: for example, read the number typed at the keyboard, store it in memory, and then send it to the display. This is essentially what computers do, except on a scale of complexity enormously greater than this. Although computers operate by manipulating 1s and 0s, this is not a very useful way for people to work. A more productive means of telling the computer what to do is required. This need led to the development of programming languages.
The first of these was known as Assembler, which operates at quite a low level in the computer, telling the computer where to move data and what to do with it. Assembler takes commands and converts them into 1s and 0s, which the computer can interpret. Newer programming languages are more sophisticated and operate at a higher level than Assembler, and their arrival has made the task of programming simpler. Most computers now handle data in chunks of 32 or 64 bits. These chunks are called words and are the basic units the computer manipulates when it is performing an action.
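As an added illustration, here is what the earlier 'take a number and display it' example might look like in Python, a high-level language of the kind just described; the comments hint at the lower-level steps that an Assembler-style view would spell out individually:

```python
# A high-level language lets us say what we want in a line or two.
# Underneath, each line is translated into many low-level instructions:
# move data from the keyboard buffer into memory, copy it into the
# processor, convert it for display, send it to the screen, and so on.

number = int(input("Type a number: "))   # read the number and store it in memory
print(number)                            # send it to the screen
```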
The key to your computer is a chip called the microprocessor. This is its brains, and is where most of the computing takes place. Before the advent of the microprocessor, computers came mainly in the form of large mainframes which had a different circuit board for each function. At the core of a mainframe computer are three separate units linked together to form what is known as the central processing unit, or CPU. These units include the arithmetic and logic unit (ALU), which is the unit that does the actual work of the computer, and the control unit, which controls the flow of data from the computer's memory into the ALU and to other devices. I shall describe microprocessors in more detail later, but you should appreciate for now that they can perform a variety of functions.
Inside your computer, in addition to the microprocessor which forms the CPU, there are other microprocessors that are used to control the graphics card, modem and other devices. The CPU microprocessor is housed on a circuit board called the motherboard.
Also on the motherboard is the clock chip, which acts as a metronome for the computer so that all its actions can be synchronized. There may also be one or two ROM chips. ROM stands for Read Only Memory, which means that the data on these chips cannot be altered; it can only be read.
These chips often contain some important programs which come supplied with the computer and which are needed for it to function properly. This is why they are made to be read-only; it would be very unfortunate if an unsuspecting user altered them. As well as the CPU microprocessor there are devices which can be used to enter data into the computer, and which it can use to output data.
The data for these devices will often pass via a slot-in circuit board, called a card, inside the PC which plugs into a slot on the motherboard. These cards perform a number of functions, such as converting data to a form usable by that particular brand of device. As you can see, that 1,024 number keeps popping up! Say you were to convert 4 kilobytes into bits: 4 × 1,024 bytes × 8 bits per byte gives 32,768 bits. These numbers lead to confusion among consumers. For example, when you purchase a 1 terabyte hard drive, it has about 8 trillion bits.
Why, then, does a new drive seem to hold less than advertised? Well, manufacturers are assuming a rounded 1,000 megabytes per gigabyte, while computers use 1,024. On top of this, your operating system needs a small amount of space on the disk.
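The gap between the two conventions is easy to compute. The Python sketch below (added as an illustration) shows the bit count of 4 kilobytes and compares a manufacturer's decimal terabyte with the binary units an operating system counts in:

```python
# 4 kilobytes expressed in bits.
kilobyte = 1024                      # bytes
print(4 * kilobyte * 8)              # 32,768 bits

# A "1 TB" drive is sold as 10**12 bytes...
marketed_tb = 10 ** 12
print(marketed_tb * 8)               # 8,000,000,000,000 bits (about 8 trillion)

# ...but an operating system counting in powers of two reports fewer "TB".
binary_tb = 2 ** 40                  # 1,099,511,627,776 bytes
print(marketed_tb / binary_tb)       # roughly 0.91, so the drive shows ~0.91 TB
```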
Even with all this information, computing can still be confusing. The beauty of computing is that it is a structured system with static rules. Technology always advances, but the principles stay the same.
A byte typically represents the smallest data type a programmer may use.
Depending on the language, the data types might be called char or byte. There are some types of data (booleans, small integers, etc.) that could be stored in fewer bits than a byte. Yet using less than a byte is not natively supported by any programming language I know of.
Why does this minimum of using 8 bits to store data exist? Why do we even need bytes? Why don't computers just use increments of bits (1 or more bits) rather than increments of bytes (multiples of 8 bits)? Just in case anyone asks: I'm not worried about it. I do not have any specific needs.
I'm just curious. Small chunks mean that you can have fine-grained things like 4-bit numbers; large chunks allow for more efficient operation (typically a CPU moves things around in 'chunks', or multiples thereof). In particular, larger addressable chunks make for bigger address spaces: if the chunks are 1 bit, a given range of addresses covers only that many bits, whereas 8-bit chunks cover eight times as many. Punched cards (the newer kind) were 12 rows of 80 columns.
Get the picture? Americans figured that the characters they needed could be stored in only 6 bits. Then we discovered that there was more in the world than just English. Eventually, we decided that 8 bits was good enough for all the characters we would ever need.
The IBM 360 came out as the dominant machine in the '60s; it was based on an 8-bit byte. It sort of had 32-bit words, but that became less important than the almighty byte.
It seemed such a waste to use 8 bits when all you really needed was 7 bits to store all the characters you ever needed. With the 360 being their main machine, 8-bit bytes were the thing for all the competitors to copy. But that's another story. In my opinion, it's an issue of addressing. To access individual bits of data, you would need eight times as many addresses (adding 3 bits to each address) compared with accessing individual bytes.
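The addressing trade-off is easy to quantify. As an added illustration, this Python sketch compares how much memory a fixed number of address bits can reach when each address names a single bit versus a whole byte:

```python
# With n address bits you can form 2**n distinct addresses.
address_bits = 32
addresses = 2 ** address_bits

# If each address names a single bit, the reachable memory is modest...
bit_addressed_bytes = addresses / 8
print(f"{bit_addressed_bytes / 2**30:.1f} GiB")    # 0.5 GiB

# ...whereas addressing whole bytes reaches eight times as much.
byte_addressed_bytes = addresses
print(f"{byte_addressed_bytes / 2**30:.1f} GiB")   # 4.0 GiB

# Equivalently, bit addressing needs 3 extra address bits, since 2**3 == 8.
print(2 ** 3)                                      # 8
```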
The byte is generally going to be the smallest practical unit to hold a number in a program, with only 256 possible values. Your points are perfectly valid; however, history will always be that insane intruder who would have ruined your plans long before you were born.
For the purposes of explanation, let's imagine a fictitious machine with an architecture called Bitel(TM) Inside, or something of the like.
Now, let's say a given instance of a Bitel-operated machine has a memory unit holding 32 billion bits (our fictitious equivalent of a 4 GB RAM unit). Some CPUs use words to address memory instead of bytes. That's their natural data type, so 16 or 32 bits. If Intel CPUs did that, it would be 64 bits. And once a thing goes on for long enough, it becomes terribly hard to change.
This is also why your hard drive or SSD likely still pretends to use 512-byte blocks, even though the disk hardware does not use 512-byte blocks and the OS doesn't either. Advanced Format drives have a software switch to disable 512-byte emulation, but generally only servers with RAID controllers turn it off.