Byte

This article is about the unit of information.

Units of information

The byte is a unit of digital information that most commonly consists of eight bits.
The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte (or octet), which is equivalent to eight bits.

Bit

A group of eight binary digits is commonly called one byte, but historically the size of the byte is not strictly defined.

Syllable (computing)

In this era, bit groupings in the instruction stream were often referred to as syllables, before the term byte became common.
Commonly used in the 1960s and 1970s, the term has mostly fallen into disuse in favour of terms like byte or word.

Power of two

The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte (2⁸ = 256 values).
As an example, a video game running on an 8-bit system might limit the score or the number of items the player can hold to 255: the result of using a byte, which is 8 bits long, to store the number, giving a maximum value of 2⁸ − 1 = 255.
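The 255 cap and the wrap-around behaviour described above can be sketched in a few lines (the helper name below is hypothetical, chosen for illustration):

```python
# A single byte holds 2**8 = 256 distinct values, 0 through 255.
def add_to_byte(value: int, delta: int) -> int:
    """Add delta to an 8-bit value, wrapping around modulo 256."""
    return (value + delta) % 256

print(add_to_byte(255, 1))   # wraps to 0, like an overflowing 8-bit score
print(max(range(2 ** 8)))    # highest value one byte can hold: 255
```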

Word (computer architecture)

These systems often had memory words of 12, 24, 36, 48, or 60 bits, corresponding to 2, 4, 6, 8, or 10 six-bit bytes.
After the introduction of the IBM System/360 design, which used eight-bit characters and supported lower-case letters, the standard size of a character (or more accurately, a byte) became eight bits.

Werner Buchholz

The term byte was coined by Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction.

Octet (computing)

Internationally, the unit octet, symbol o, explicitly defines a sequence of eight bits, eliminating the ambiguity of the byte.
The term is often used when the term byte might be ambiguous, as the byte has historically been used for storage units of a variety of sizes.

Binary-coded decimal

Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy.
In byte-oriented systems (i.e. most modern computers), the term unpacked BCD usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two decimal digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9.
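Packed BCD amounts to putting one decimal digit in each nibble of a byte. A minimal sketch with shifts and masks (hypothetical helper names):

```python
def pack_bcd(tens: int, units: int) -> int:
    """Pack two decimal digits (0-9 each) into one byte: high and low nibble."""
    assert 0 <= tens <= 9 and 0 <= units <= 9
    return (tens << 4) | units

def unpack_bcd(byte: int) -> tuple[int, int]:
    """Split a packed-BCD byte back into its two decimal digits."""
    return byte >> 4, byte & 0x0F

b = pack_bcd(4, 2)
print(hex(b))          # 0x42 — the digits stay readable in a hex dump
print(unpack_bcd(b))   # (4, 2)
```

This readability in hex dumps is one reason packed BCD persisted in financial and mainframe code long after binary integers became the norm.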

Six-bit character code

The six-bit character code was often used in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s.
This encoding was replaced by the 8-bit EBCDIC code when System/360 standardized on 8-bit bytes.

Character (computing)

Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures.
A char in the C programming language is a data type with the size of exactly one byte, which in turn is defined to be large enough to contain any member of the “basic execution character set”.

ISO/IEC 80000

The international standard IEC 80000-13 codified this common meaning. The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format as the upper-case character B.
The Standard also includes definitions for units relating to information technology, such as the erlang (E), bit (bit), octet (o), byte (B), baud (Bd), shannon (Sh), hartley (Hart) and the natural unit of information (nat).
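The information units listed above are interconvertible: a shannon equals one bit, a hartley is log₂ 10 bits, and a nat is log₂ e bits. A quick check of those conversion factors:

```python
import math

# One hartley (base-10 information unit) expressed in bits.
bits_per_hartley = math.log2(10)   # ≈ 3.3219
# One nat (base-e information unit) expressed in bits.
bits_per_nat = math.log2(math.e)   # ≈ 1.4427

print(round(bits_per_hartley, 4), round(bits_per_nat, 4))
```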

IBM 7030 Stretch

Stretch, for which Werner Buchholz coined the term byte in June 1956, introduced multiprogramming, memory protection, generalized interrupts, and the eight-bit byte for I/O.

Hexadecimal

A four-bit quantity is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit.
Each hexadecimal digit represents four binary digits, also known as a nibble, which is half a byte.
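The nibble-to-hex-digit correspondence can be demonstrated directly with shifts and masks (the helper name is hypothetical):

```python
def nibbles(byte: int) -> tuple[int, int]:
    """Return the high and low nibble of a byte; each maps to one hex digit."""
    return byte >> 4, byte & 0x0F

value = 0xB7
hi, lo = nibbles(value)
print(hi, lo)              # 11 7 — the values of hex digits B and 7
print(f"{value:02X}")      # B7: two hex digits, one per nibble
```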

IBM AN/FSQ-31 SAC Data Processing System

Later, Schwartz's language JOVIAL actually used the term, though he recalled only vaguely that it was derived from AN/FSQ-31.
Six-bit bytes, as opposed to the eight-bit byte in common use today, were typical of IBM and other scientific computers of the era.

IBM System/360

During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations used in earlier card punches.
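The ASCII/EBCDIC split is easy to observe today: Python ships an EBCDIC codec ("cp500", one international EBCDIC variant among several), so the same text encodes to different bytes under each scheme:

```python
# Compare the byte values of "A1" under ASCII and an EBCDIC variant.
text = "A1"
print(text.encode("ascii").hex())  # 4131 — ASCII: 'A' = 0x41, '1' = 0x31
print(text.encode("cp500").hex())  # c1f1 — EBCDIC: 'A' = 0xC1, '1' = 0xF1
```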

Nibble

Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record from Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on the SAGE air defense system at MIT Lincoln Laboratory in 1956 or 1957, a system jointly developed by Rand, MIT, and IBM.
In computing, a nibble (occasionally nybble or nyble to match the spelling of byte) is a four-bit aggregation, or half an octet.

Binary prefix

In some fields of the software and computer hardware industries a binary prefix is used for bytes and bits, while producers of computer storage devices adhere to decimal SI multiples.
A binary prefix is a unit prefix for multiples of units in data processing, data transmission, and digital information, notably the bit and the byte, to indicate multiplication by a power of 2.
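The decimal/binary split described above comes down to powers of 10 versus powers of 2:

```python
KILO = 10 ** 3   # SI "kilo": the convention of storage-device manufacturers
KIBI = 2 ** 10   # binary "kibi": 1024, the convention for memory sizes

print(KIBI - KILO)    # 24 — the gap between one kibibyte and one kilobyte
print(8 * 2 ** 30)    # bytes in 8 GiB of RAM
print(8 * 10 ** 9)    # bytes in a drive marketed as "8 GB"
```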

Data type

Many programming languages define a byte data type.
A color, on the other hand, might be represented by three bytes denoting the amounts each of red, green, and blue, and a string representing the color's name.
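The three-bytes-plus-a-name representation of a colour might be sketched like this (the colour value and field names are illustrative, not from the source):

```python
# An RGB colour: one byte per channel, plus a human-readable name.
color = {"name": "amber", "rgb": bytes([255, 191, 0])}

r, g, b = color["rgb"]        # indexing bytes yields ints in 0..255
print(r, g, b)                # 255 191 0
print(color["rgb"].hex())     # ffbf00 — the familiar web-colour form
```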

IEEE 1541-2002

Moreover, there is not a consistent use of the symbols to indicate quantities of bits and bytes – the unit symbol "Mb", for instance, has been widely used for both megabytes and megabits.

C (programming language)

The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard).
However, few utilities were ultimately written in B because it was too slow, and B could not take advantage of PDP-11 features such as byte addressability.
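The C rule that sizeof(char) is exactly 1 (one byte) can be observed from Python's ctypes module, which mirrors the C fundamental types of the host platform:

```python
import ctypes

# In C, sizeof(char) == 1 by definition; ctypes reflects this.
print(ctypes.sizeof(ctypes.c_char))    # 1
print(ctypes.sizeof(ctypes.c_uint8))   # 1 — an explicit 8-bit byte
```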

Kilo-

While the numerical difference between the decimal and binary interpretations is relatively small for the prefixes kilo and mega, it grows to over 20% for prefix yotta.
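The growth of that discrepancy is simple arithmetic: for prefix k (kilo, mega, …, yotta) the binary interpretation is 2^(10k) against the decimal 10^(3k), and the ratio compounds. A quick check (the helper name is illustrative):

```python
def discrepancy(power_of_ten: int, power_of_two: int) -> float:
    """Percentage by which the binary interpretation exceeds the decimal one."""
    return (2 ** power_of_two / 10 ** power_of_ten - 1) * 100

print(f"{discrepancy(3, 10):.1f}%")   # kilo vs kibi: 2.4%
print(f"{discrepancy(24, 80):.1f}%")  # yotta vs yobi: 20.9%
```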

Data hierarchy

In terms of data storage, data fields are made of bytes and these in turn are made up of bits.

Ternary numeral system

Some ternary computers such as the Setun defined a tryte to be six trits or approximately 9.5 bits (holding more information than the de facto binary byte).
