Friday, August 8, 2008

Firefox search plugins for Google Trends and Alexa search

I found some interesting SEO plugins for Firefox: an Alexa search plugin and a Google Trends search plugin. You can download them both from Firefox Search Plugins for Google Trends and Alexa search.

ALT codes, html character code, ascii code

I found great ASCII codes at ASCII CODE, and some great ALT codes at Alt codes; they are the best ALT codes I've come across. I even found an ALT code for a character that looks like a mouth with a tongue, which I later used to make a smiley (the code was 0222).

There are also some great HTML codes for special characters at HTML CHARACTER CODE: awesome characters, awesome descriptions, everything you could want from an HTML character code reference!

Tuesday, July 29, 2008

HTML character code, html code characters

HTML has been in use since 1991, but HTML 4.0 (December 1997) was the first standardized version where international characters were given reasonably complete treatment. When an HTML document includes special characters outside the range of seven-bit ASCII two goals are worth considering: the information's integrity, and universal browser display.
Contents


* 1 The document character encoding
* 2 Character references
  o 2.1 XML character entity references
  o 2.2 HTML character entity references

The document character encoding

When HTML documents are served there are three ways to tell the browser what specific character encoding is to be used for display to the reader. First, HTTP headers can be sent by the web server along with each web page (HTML document). A typical HTTP header looks like this:

Content-Type: text/html; charset=ISO-8859-1

For HTML (not usually XHTML), the other method is for the HTML document to include this information at its top, inside the HEAD element.

<meta http-equiv="Content-Type" content="text/html; charset=US-ASCII">

XHTML documents have a third option: to express the character encoding in the XML preamble, for example

<?xml version="1.0" encoding="ISO-8859-1"?>

These methods each advise the receiver that the file being sent uses the character encoding specified. The character encoding is often referred to as the "character set" and it indeed does limit the characters in the raw source text. However, the HTML standard states that the "charset" is to be treated as an encoding of Unicode characters and provides a way to specify characters that the "charset" does not cover. The term code page is also used similarly.
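As a rough sketch of how a receiver might extract the charset parameter from such a header (the function name and the windows-1252 fallback are illustrative choices, not part of any specification):

```python
def charset_from_content_type(header_value, default="windows-1252"):
    """Pull the charset parameter out of a Content-Type header value."""
    for part in header_value.split(";")[1:]:
        name, _, value = part.strip().partition("=")
        if name.lower() == "charset":
            return value.strip().strip('"')
    return default

print(charset_from_content_type("text/html; charset=ISO-8859-1"))  # ISO-8859-1
print(charset_from_content_type("text/html"))                      # falls back to the default
```

A real HTTP parser must also handle comments, continuation lines, and other parameter quoting rules; this only illustrates the "text/html; charset=XXX" shape shown above.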

It is a bad idea to send incorrect information about the character encoding used by a document. For example, a server where multiple users may place files created on different machines cannot promise that all the files it sends will conform to the server's specification — some users may have machines with different character sets. For this reason, many servers simply do not send the information at all, thus avoiding making false promises. However, this may result in the equally bad situation where the user agent displays the document incorrectly because neither sending party has specified a character encoding.

The HTTP header specification supersedes all HTML (or XHTML) meta tag specifications, which can be a problem if the header is incorrect and one does not have the access or the knowledge to change it.

Browsers receiving a file with no character encoding information must make a blind assumption. For Western European languages, it is typical and fairly safe to assume windows-1252 (which is similar to ISO-8859-1 but has printable characters in place of some control codes that are forbidden in HTML anyway), but it is also common for browsers to assume the character set native to the machine on which they are running. The consequence of choosing incorrectly is that characters outside the printable ASCII range (32 to 126) usually appear incorrectly. This presents few problems for English-speaking users, but other languages regularly — in some cases, always — require characters outside that range. In CJK environments where there are several different multi-byte encodings in use, auto-detection is often employed.
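The effect of a wrong assumption is easy to demonstrate by decoding the same bytes two ways (Python here, purely for illustration):

```python
# 0xE9 is 'é' in windows-1252 (and ISO-8859-1), but it is not a valid
# UTF-8 start byte, so a UTF-8 decoder replaces it with U+FFFD.
raw = "café".encode("windows-1252")
print(raw.decode("windows-1252"))             # café
print(raw.decode("utf-8", errors="replace"))  # caf� — mojibake
```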

It is increasingly common for multilingual websites to use one of the Unicode/ISO 10646 transformation formats, as this allows use of the same encoding for all languages. Generally UTF-8 is used rather than UTF-16 or UTF-32 because it is easier to handle in programming languages that assume a byte-oriented ASCII superset encoding, and it is efficient for ASCII-heavy text (which HTML tends to be).
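A quick, illustrative comparison of the encoded sizes:

```python
# ASCII-heavy markup costs one byte per character in UTF-8 but two in UTF-16.
text = "<p>Hello, world</p>"
print(len(text.encode("utf-8")))      # 19 bytes — one per ASCII character
print(len(text.encode("utf-16-le")))  # 38 bytes — two per character
print(len("λ".encode("utf-8")))       # 2 bytes — non-ASCII characters cost more
```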

Successful viewing of a page is not necessarily an indication that its encoding is specified correctly. If the page's creator and reader are both assuming some machine-specific character encoding, and the server does not send any identifying information, then the reader will nonetheless see the page as the creator intended, but other readers with different native sets will not see the page as intended.

Character references

Main articles: character entity reference and numeric character reference

In addition to native character encodings, characters can also be encoded as character references, which can be numeric character references (decimal or hexadecimal) or character entity references. Character entity references are also sometimes referred to as named entities, or HTML entities for HTML. HTML's usage of character references derives from SGML.

Character entity references have the format &name;, where "name" is a case-sensitive alphanumeric string. For example, the character 'λ' can be encoded as &lambda; in an HTML 4 document. The characters <, >, " and & are used to delimit tags, attribute values, and character references. The character entity references &lt;, &gt;, &quot; and &amp;, which are predefined in HTML, XML, and SGML, can be used instead for literal representations of the characters.

See HTML code: character code reference.

Numeric character references can be in decimal format, &#DD;, where DD is a variable-width string of decimal digits. Similarly there is a hexadecimal format, &#xHHHH;, where HHHH is a variable-width string of hexadecimal digits, though many consider it good practice never to use fewer than four hex digits, and never an odd number of hex digits (due to the correspondence of two hex digits to one byte). Unlike named entities, hexadecimal character references are case-insensitive in HTML. For example, λ can also be represented as &#955;, &#x3BB; or &#X3bb;.
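Python's standard html module implements these reference rules and is a convenient way to check such equivalences (used here purely for illustration):

```python
import html

# Named, decimal, and both hex spellings of λ all decode to the same character.
print(html.unescape("&lambda; &#955; &#x3BB; &#X3bb;"))  # λ λ λ λ

# Escaping goes the other way, emitting the predefined entities.
print(html.escape('<a href="x">&</a>'))  # &lt;a href=&quot;x&quot;&gt;&amp;&lt;/a&gt;
```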

Numeric references always refer to Universal Character Set code points, regardless of the page's encoding. Using numeric references that refer to UCS control code ranges is forbidden, with the exception of the linefeed, tab, and carriage return characters. That is, characters in the hexadecimal ranges 00–08, 0B–0C, 0E–1F, 7F, and 80–9F cannot be used in an HTML document, not even by reference; so "&#x99;", for example, is not allowed. However, for backward compatibility with early HTML authors and browsers that ignored this restriction, raw characters and numeric character references in the 80–9F range are interpreted by some browsers as representing the characters mapped to bytes 80–9F in the Windows-1252 encoding.

Unnecessary use of HTML character references may significantly reduce HTML readability. If the character encoding for a web page is chosen appropriately then HTML character references are usually only required for a few special characters (or not at all if a native Unicode encoding like UTF-8 is used).

XML character entity references

Unlike traditional HTML with its large range of character entity references, in XML there are only five predefined character entity references. These are used to escape characters that are markup sensitive in certain contexts:

* &amp; → & (ampersand, U+0026)
* &lt; → < (less-than sign, U+003C)
* &gt; → > (greater-than sign, U+003E)
* &quot; → " (quotation mark, U+0022)
* &apos; → ' (apostrophe, U+0027)

All other character entity references have to be defined before they can be used. For example, use of &eacute; (which gives é, Latin small letter E with acute, U+00E9, in HTML) in an XML document will generate an error unless the entity has already been defined. XML also requires that the x in hexadecimal numeric references be in lowercase: for example &#xA1b; rather than &#XA1b;. XHTML, which is an XML application, supports the HTML 4 entity set and XML's &apos; entity, which does not appear in HTML 4.

However, use of &apos; in XHTML should generally be avoided for compatibility reasons; &#39; may be used instead.
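For illustration, Python's standard xml.sax.saxutils.escape implements the three mandatory XML escapes; additional entities such as the quotation mark must be requested explicitly:

```python
from xml.sax.saxutils import escape

# The three characters that must always be escaped in XML text content.
print(escape("if a < b & b > c"))  # if a &lt; b &amp; b &gt; c

# Quotes are escaped only when passed as extra entities (needed in attributes).
print(escape('say "hi"', {'"': "&quot;"}))  # say &quot;hi&quot;
```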

HTML character entity references

For a list of all named HTML character entity references, see List of XML and HTML character entity references (approximately 250 entries).




http://en.wikipedia.org/wiki/Character_encodings_in_HTML

ASCII character codes

American Standard Code for Information Interchange (ASCII), pronounced /ˈæski/ is a character encoding based on the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that work with text. Most modern character encodings — which support many more characters than did the original — have a historical basis in ASCII.

Historically, ASCII developed from telegraphic codes and its first commercial use was as a seven-bit teleprinter code promoted by Bell data services. Work on ASCII formally began October 6, 1960 with the first meeting of the ASA X3.2 subcommittee. The first edition of the standard was published in 1963, a major revision in 1967, and the most recent update in 1986. Compared to earlier telegraph codes, the proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists, and added features for devices other than teleprinters. Some ASCII features, including the "ESCape sequence", were due to Robert Bemer.

ASCII includes definitions for 128 characters: 33 are non-printing, mostly obsolete control characters that affect how text is processed; 94 are printable characters (excluding the space). The ASCII character encoding — or a compatible extension — is used on nearly all common computers, especially personal computers and workstations.


The American Standard Code for Information Interchange (ASCII) was developed under the auspices of a committee of the American Standards Association, called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group. The ASA became the United States of America Standards Institute or USASI[8] and ultimately the American National Standards Institute.

The X3.2 subcommittee designed ASCII based on earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. The encodings in use before ASCII included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include control characters compatible with the Comité Consultatif International Téléphonique et Télégraphique standard, Fieldata and early EBCDIC, more than 64 codes were required.

The committee debated the possibility of a shift key function (like the Baudot code), which would allow more than 64 codes to be represented by six bits. In a shifted code, some character codes determine choices between options for the following character codes. This allows compact encoding, but is less reliable for data transmission; an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least a seven-bit code.[9]

The committee considered an eight-bit code, since eight bits would allow two four-bit patterns to efficiently encode two digits with binary coded decimal. However this would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. Since perforated tape at the time could record eight bits in one position, this also allowed for a parity bit for error checking if desired.[10] Machines with octets as the native data type that did not use parity checking typically set the eighth bit to 0.[11]
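As an illustrative sketch (the helper below is hypothetical, not taken from the standard), an even-parity eighth bit over a seven-bit code can be computed like this:

```python
def with_even_parity(code7):
    """Set the eighth bit so the total number of 1-bits is even."""
    parity = bin(code7).count("1") % 2
    return code7 | (parity << 7)

print(f"{with_even_parity(0x41):08b}")  # 'A' (1000001) already has an even bit count
print(f"{with_even_parity(0x43):08b}")  # 'C' (1000011) gets the parity bit set
```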

The code itself was structured so that most control codes were together, and all graphic codes were together. The first two columns (32 positions) were reserved for control characters.[12] The "space" character had to come before graphics to make sorting algorithms easy, so it became position 32.[13] The committee decided it was important to support the upper case 64-character alphabets, and chose to structure ASCII so it could easily be reduced to a usable 64-character set of graphic codes.[14] Lower case letters were therefore not interleaved with upper case. To keep options for lower case letters and other graphics open, the special and numeric codes were placed before the letters, and the letter 'A' was placed in position 65 to match the draft of the corresponding British standard.[15] The digits 0–9 were placed so they correspond to values in binary prefixed with 0011, making conversion with binary-coded decimal straightforward.
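Both placement choices are easy to verify in code (Python, for illustration):

```python
# Digits carry their numeric value in the low four bits under a fixed 0011 prefix,
# so ASCII-to-BCD conversion is a single mask.
for ch in "0123456789":
    assert ord(ch) >> 4 == 0b0011     # high bits: the 0011 prefix
    assert ord(ch) & 0x0F == int(ch)  # low bits: the BCD value

print(ord("A"))  # 65, the position chosen to match the draft British standard
print(ord("0"))  # 48 = 0b0110000
```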

Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters. Thus #, $ and % were placed to correspond to 3, 4, and 5 in the adjacent column. The parentheses could not correspond to 9 and 0, however, because the place corresponding to 0 was taken by the space character. Since many European typewriters placed the parentheses with 8 and 9, these corresponding positions were chosen for the parentheses. The @ symbol was not used in continental Europe and the committee expected it would be replaced by an accented À in France, so the @ was placed in position 64 next to the letter A.[16]

The control codes felt essential for data transmission were the start of message (SOM), end of address (EOA), end of message (EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU), a reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance between their bit patterns.[17]

With the other special characters and control codes filled in, ASCII was published as ASA X3.4-1963, leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code.[18] It now seems obvious that these positions should have been assigned to the lower case alphabet, but there was some debate at the time whether there should be more control characters instead.[19] The indecision did not last long: in May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lower case characters to columns 6 and 7,[20] and International Organization for Standardization TC 97 SC 2 voted in October to incorporate the change into its draft standard.[21] The X3.2.4 task group voted its approval for the change to ASCII at its May, 1963 meeting.[22] Locating the lowercase letters in columns 6 and 7 caused the characters to differ in bit pattern from the upper case by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers.
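That single-bit difference is easy to demonstrate (Python used here just for illustration):

```python
# Columns 6 and 7 sit exactly 0x20 above columns 4 and 5, so the cases
# differ in one bit and case folding is a single OR or AND.
assert ord("a") ^ ord("A") == 0x20
print(chr(ord("G") | 0x20))   # force lower case
print(chr(ord("g") & ~0x20))  # force upper case
```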

The X3 committee made other changes, including other new characters (the curly bracket and vertical line characters),[23] renaming some control characters (SOM became start of header (SOH)) and moving or removing others (RU was removed).[24] ASCII was subsequently updated as USASI X3.4-1967, then USASI X3.4-1968, ANSI X3.4-1977, and finally, ANSI X3.4-1986.

The X3 committee also addressed how ASCII should be transmitted (low bit first), and how it should be recorded on perforated tape. They proposed a 9-track standard for magnetic tape, and attempted to deal with some forms of punched card formats.

ASCII itself first entered commercial use in 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (Teletype Wide-area eXchange) network. TWX originally used the earlier five-bit Baudot code, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence.[2] His British colleague Hugh McGregor Ross helped to popularize this work — according to Bemer, "so much so that the code that was to become ASCII was first called the Bemer-Ross Code in Europe".[25]

On March 11, 1968, U.S. President Lyndon B. Johnson mandated that all computers purchased by the United States federal government support ASCII, stating:

I have also approved recommendations of the Secretary of Commerce regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations. All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have the capability to use the Standard Code for Information Interchange and the formats prescribed by the magnetic tape and paper tape standards when these media are used.[26]

Other international standards bodies have ratified character encodings such as ISO/IEC 646 that are identical or nearly identical to ASCII, with extensions for characters outside the English alphabet and symbols used outside the United States, such as the symbol for the United Kingdom's pound sterling (£). Almost every country needed an adapted version of ASCII since ASCII only suited the needs of the USA and a few other countries. For example, Canada had its own version that supported French. Other adapted encodings include ISCII (India), VISCII (Vietnam), and YUSCII (Yugoslavia). Although these encodings are sometimes referred to as ASCII, true ASCII is strictly defined only by the ANSI standard.

ASCII has been incorporated into the Unicode character set as the first 128 symbols, so the ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be backward compatible with ASCII, a significant advantage.

Asteroid 3568 ASCII is named after the character encoding.


* ^[a] Printable Representation, the Unicode characters from the area U+2400 to U+2421 reserved for representing control characters when it is necessary to print or display them rather than have them perform their intended function. Some browsers may not display these properly.
* ^[b] Control key Sequence/caret notation, the traditional key sequences for inputting control characters. The caret (^) represents the "Control" or "Ctrl" key that must be held down while pressing the second key in the sequence. The caret-key representation is also used by some software to represent control characters.
* ^[c] Character Escape Codes in the C programming language and many other languages influenced by it, such as Java and Perl (though not all implementations necessarily support all escape codes).
* ^[d] The Backspace character can also be entered by pressing the "Backspace", "Bksp", or ← key on some systems.
* ^[e] The Delete character can also be entered by pressing the "Delete" or "Del" key. It can also be entered by pressing the "Backspace", "Bksp", or ← key on some systems.
* ^[f] The '\e' escape sequence is not part of ISO C and many other language specifications. However, it is understood by several compilers.
* ^[g] The Escape character can also be entered by pressing the "Escape" or "Esc" key on some systems.
* ^[h] The Carriage Return character can also be entered by pressing the "Return", "Ret", "Enter", or ↵ key on most systems.
* ^[i] The ambiguity surrounding Backspace comes from mismatches between the intent of the human or software transmitting the Backspace and the interpretation by the software receiving it. If the transmitter expects Backspace to erase the previous character and the receiver expects Delete to be used to erase the previous character, many receivers will echo the Backspace as "^H", just as they would echo any other uninterpreted control character. (A similar mismatch in the other direction may yield Delete displayed as "^?".)

ASCII reserves the first 32 codes (numbers 0–31 decimal) for control characters: codes originally intended not to carry printable information, but rather to control devices (such as printers) that make use of ASCII, or to provide meta-information about data streams such as those stored on magnetic tape. For example, character 10 represents the "line feed" function (which causes a printer to advance its paper), and character 8 represents "backspace". Control characters that do not include carriage return, line feed or white space are called non-whitespace control characters.[27] Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing the structure or appearance of text within a document. Other schemes, such as markup languages, address page and document layout and formatting.
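Counting the codes confirms the split given earlier (33 control characters, 95 graphic codes including the space):

```python
# Codes 0–31 plus DEL (127) are the control characters.
controls = [c for c in range(128) if c < 32 or c == 127]
print(len(controls))           # 33
print(128 - len(controls))     # 95 graphic codes: 94 printable plus the space
print(ord("\n"), ord("\b"))    # 10 and 8 — line feed and backspace, as above
```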

The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this left was sometimes intentional (where a character would be used slightly differently on a terminal link than on a data stream) and sometimes more accidental (such as what "delete" means).

Probably the most influential single device on the interpretation of these characters was the ASR-33 Teletype series, which was a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage up through the 1980s, lower cost and in some ways less fragile than magnetic tape. In particular, the Teletype 33 machine assignments for codes 17 (Control-Q, DC1, also known as XON), 19 (Control-S, DC3, also known as XOFF), and 127 (DELete) became de-facto standards. Because the keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (Control-O, Shift In) interpreted as "delete previous character" was also adopted by many early timesharing systems but eventually faded out.

The use of Control-S (XOFF, an abbreviation for "transmit off") as a handshaking signal warning a sender to stop transmission because of impending overflow, and Control-Q (XON, "transmit on") to resume sending, persists to this day in many systems as a manual output control technique. On some systems Control-S retains its meaning but Control-Q is replaced by a second Control-S to resume output.

Code 127 is officially named "delete" but the Teletype label was "rubout". Since the original standard gave no detailed interpretation for most control codes, interpretations of this code varied. The original Teletype meaning, and the intent of the standard, was to make it an ignored character, the same as NUL (all zeroes). This was specifically useful for paper tape, because punching the all-ones bit pattern on top of an existing mark would obliterate it. Tapes designed to be "hand edited" could even be produced with spaces of extra NULs (blank tape) so that a block of characters could be "rubbed out" and then replacements put into the empty space.

As video terminals began to replace printing ones, the value of the "rubout" character was lost. DEC systems, for example, interpreted "Delete" to mean "remove the character before the cursor," and this interpretation also became common in Unix systems. Most other systems used "Backspace" for that meaning and used "Delete" as it was used on paper tape, to mean "remove the character after the cursor". That latter interpretation is the most common today.

Many more of the control codes have taken on meanings quite different from their original ones. The "escape" character (code 27), for example, was originally intended to allow sending other control characters as literals instead of invoking their meaning. This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have a reserved meaning. Over time this meaning has been coopted and has eventually drifted. In modern use, an ESC sent to the terminal usually indicates the start of a command sequence, usually in the form of an ANSI escape code. An ESC sent from the terminal is most often used as an "out of band" character used to terminate an operation, as in the TECO and vi text editors.

The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The clearest example of this is the newline problem on various operating systems. On printing terminals there is no question that you terminate a line of text with both "Carriage Return" and "Linefeed". The first returns the printing carriage to the beginning of the line and the second advances to the next line without moving the carriage. However, requiring two characters to mark the end of a line introduced unnecessary complexity and questions as to how to interpret each character when encountered alone. To simplify matters, plain text files on Unix and Amiga systems use line feeds alone to separate lines. Similarly, older Macintosh systems, among others, use only carriage returns in plain text files. Various DEC operating systems used both characters to mark the end of a line, perhaaps for compatibility with teletypes, and this de facto standard was copied in the CP/M operating system and then in MS-DOS and eventually Microsoft Windows. Transmission of text over the Internet, for protocols such as e-mail and the World Wide Web, uses both characters.
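A minimal, illustrative sketch of normalizing the three conventions (the helper is hypothetical; order matters, since CRLF must be collapsed before lone CRs):

```python
def normalize_newlines(text):
    # Replace CRLF first so its CR is not converted separately.
    return text.replace("\r\n", "\n").replace("\r", "\n")

print(repr(normalize_newlines("dos\r\nmac\runix\n")))  # 'dos\nmac\nunix\n'
```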

The DEC operating systems, along with CP/M, tracked file length only in units of disk blocks and used Control-Z (SUB) to mark the end of the actual text in the file (also done for CP/M compatibility in some cases in MS-DOS, though MS-DOS has always recorded exact file lengths). Text strings ending with the null character are known as ASCIZ or C strings.

ASCII variants

As computer technology spread throughout the world, different standards bodies and corporations developed many variations of ASCII in order to facilitate the expression of non-English languages that used Roman-based alphabets. One could class some of these variations as "ASCII extensions", although some misuse that term to cover all variants, including those that do not preserve ASCII's character-map in the 7-bit range.

The PETSCII Code used by Commodore International for their 8-bit systems is probably unique among post-1970 codes in being based on ASCII-1963 instead of the far more common ASCII-1967, such as found on the ZX Spectrum computer. Atari and Galaksija computers also used ASCII variants.

Incompatibility vs interoperability

From early in its development,[31] ASCII was intended to be just one of several national variants of an international character code standard, ultimately published as ISO/IEC 646 (1972), which would share most characters in common but assign other locally-useful characters to several code points reserved for "national use." However, the four years that elapsed between the publication of ASCII-1963 and ISO's first acceptance of an international recommendation in 1967 caused ASCII's choices for the national use characters to appear to be de facto standards for the world, leading to confusion and incompatibility once other countries did begin to make their own assignments to these code points.

ISO/IEC 646, like ASCII, was a 7-bit character set. It made no additional codes available, so the same code points encoded different characters in different countries. Escape codes were defined to indicate which national variant applied to a piece of text, but these were rarely used, so it was often impossible to know what variant to work with and therefore which character a code represented, and text-processing systems could generally cope with only one variant anyway.

Because the bracket and brace characters of ASCII were assigned to "national use" code points that were used for accented letters in other national variants of ISO/IEC 646, a German, French, or Swedish, etc., programmer had to get used to reading and writing
ä aÄiÜ='Ön'; ü
or, using trigraphs,
??< a??(i??)='??/n'; ??>
instead of
{ a[i]='\n'; }

Eventually, as 8-, 16-, and 32-bit computers began to replace 18- and 36-bit computers as the norm, it became common to use an 8-bit byte to store each character in memory, providing an opportunity for extended, 8-bit, relatives of ASCII, with the 128 additional characters providing room to avoid most of the ambiguity that had been necessary in 7-bit codes.

For example, IBM developed 8-bit code pages, such as code page 437, which replaced the control-characters with graphic symbols such as smiley faces, and mapped additional graphic characters to the upper 128 positions. Operating systems such as DOS supported these code-pages, and manufacturers of IBM PCs supported them in hardware. Digital Equipment Corporation developed the Multinational Character Set (DEC-MCS) for use in the popular VT220 terminal.

Eight-bit standards such as ISO/IEC 8859 (derived from the DEC-MCS) and Mac OS Roman developed as true extensions of ASCII, leaving the original character-mapping intact, but adding additional character definitions after the first 128 (i.e., 7-bit) characters. This enabled representation of characters used in a broader range of languages. Because there were several competing 8-bit code standards, they continued to suffer from incompatibilities and limitations. Still, ISO-8859-1 (Latin 1), its variant Windows-1252 (often mislabeled as ISO-8859-1), and the original 7-bit ASCII remain the most common character encodings in use today.


by Mary Brandel

(IDG) -- If it weren't for a particular development in 1963, we wouldn't have e-mail and there would be no World Wide Web. Cursor movement, laser printers and video games — all of these owe a big debt of gratitude to this technological breakthrough.

What is it? Something most of us take for granted today: ASCII. Yep, plain old ASCII, that simplest of text formats.

To understand why ASCII (pronounced AS-KEE) is such a big deal, you have to realize that before it, different computers had no way to communicate with one another. Each manufacturer had its own way of representing letters in the alphabet, numbers and control codes. "We had over 60 different ways to represent characters in computers. It was a real Tower of Babel," says Bob Bemer, who was instrumental in ASCII's development and is widely known as "the father of ASCII."

ASCII, which stands for American Standard Code for Information Interchange, functions as a common denominator between computers that otherwise have nothing in common. It works by assigning standard numeric values to letters, numbers, punctuation marks and other characters such as control codes. An uppercase "A," for example, is represented by the number 65.

All the characters used in e-mail messages are ASCII characters, as are the characters in HTML documents.

But in 1960, there was no such standardization. IBM's equipment alone used nine different character sets. "They were starting to talk about families of computers, which need to communicate. I said, 'Hey, you can't even talk to each other, let alone the outside world,'" says Bemer, who worked at IBM from 1956 to 1962.

Midway through Bemer's IBM career, this heterogeneity became a real concern. So in May 1961, Bemer submitted a proposal for a common computer code to the American National Standards Institute (ANSI). The X3.4 Committee — representing most computer manufacturers of the day and chaired by John Auwaerter, vice president of the former Teletype Corp. — was established and got right to work.

It took the ANSI committee more than two years to agree on a common code. Part of the lengthy debate was caused by self-interest. The committee had to decide whose proprietary characters were represented. "It got down to nitpicking," Bemer says. "But finally, Auwaerter and I shook hands outside of the meeting room and said, 'This is it.'" Ironically, the end result bore a strong resemblance to Bemer's original plan.

If you were to jump ahead to this year, you'd think it was smooth sailing after that. Today, ASCII is used in billions of dollars' worth of computer equipment and in most operating systems, the exception being Windows NT, which uses the newer Unicode standard, itself only partly compatible with ASCII.

However, there was an 18-year gap between the completion of ASCII in 1963 and its common acceptance. This has everything to do with IBM and its System/360, which was released in 1964. While ASCII was being developed, everyone — even IBM — assumed the company would move to the new standard. Until then, IBM used EBCDIC, an extension of the old punch-card code.

But just as ASCII became a done deal and the System/360 was ready for release, Dr. Frederick Brooks, head of IBM's OS/360 development team, told Bemer the punch cards and printers wouldn't be ready for ASCII on time. IBM tried to develop a way for the System/360 to switch between ASCII and EBCDIC, but the technique didn't work.

Until 1981, when IBM finally used ASCII in its first PC, the only ASCII computer was the Univac 1050, released in 1964 (although Teletype immediately made all of its new typewriter-like machines work in ASCII). But from that point on, ASCII became the standard for computer communication.

The story of ASCII wouldn't be complete without mentioning the "escape" sequence. According to Bemer, it's the most important piece of the ASCII puzzle. Early in the game, ANSI recognized that 128 characters were insufficient to accommodate a worldwide communication system, but the seven-bit limit of the era's hardware ruled out going beyond that.

So Bemer developed the escape sequence, which allows the computer to break from one alphabet and enter another. Since 1963, more than 150 "extra-ASCII" alphabets have been defined.
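One familiar descendant of Bemer's idea is the ANSI terminal escape sequence, in which the ESC character (code 27, hex 1B) tells the receiver that the following bytes are commands rather than text. A small illustrative Python sketch (the color codes are standard ANSI SGR parameters, shown here only as an example):

```python
ESC = "\x1b"  # ASCII 27, the "escape" character

# An ANSI escape sequence: ESC [ 31 m switches to red,
# ESC [ 0 m resets the terminal to its default state.
red_warning = ESC + "[31m" + "warning" + ESC + "[0m"
print(red_warning)  # shows "warning" in red on an ANSI-capable terminal
```

The key point is the same as in 1963: a single reserved code lets a seven-bit stream "break out" into an arbitrarily larger command or character space.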

Along with Cobol, ASCII is one of the few basic computer technologies from the 1960s that still thrives today.

Code 32, the "space" character, denotes the space between words, as produced by the space bar of a keyboard. The "space" character is considered an invisible graphic rather than a control character. Codes 33 to 126, known as the printable characters, represent letters, digits, punctuation marks, and a few miscellaneous symbols.
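The printable range described above is easy to enumerate; a quick Python sketch:

```python
# Codes 33-126 are the printable ASCII characters; 32 is the space.
printable = "".join(chr(c) for c in range(33, 127))
print(printable)

assert len(printable) == 94          # 126 - 33 + 1 characters
assert printable[0] == "!"           # code 33
assert printable[-1] == "~"          # code 126
```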

Seven-bit ASCII provided seven "national" character positions and, where the combined hardware and software permitted, could use overstrikes to simulate some additional international characters: in such a scheme a backspace could precede a grave accent (which the American and British standards, but only those standards, also call "opening single quotation mark"), a backtick, or a breath mark (inverted vel).


Because ASCII was originally intended for information interchange (over teletype), it includes, besides the printable characters, command characters for controlling the communication link. This is the usual set of special signals that pre-computer messaging systems (Morse code, flag semaphore) also employed, extended to suit the specifics of the equipment.

(The hexadecimal code of each character is given after its name.)

* NUL, 00 — Null. Always ignored. On punched tape a 1 was represented by a hole and a 0 by its absence, so the blank stretches of tape before and after a message consisted of these characters. Today NUL terminates strings in many programming languages (a string being understood as a sequence of characters), and in some operating systems it is the last character of any text file.
* SOH, 01 — Start of Heading.
* STX, 02 — Start of Text. The "text" was the part of the message intended for printing; the address, checksum, and so on belonged either to the heading or to the part of the message after the text.
* ETX, 03 — End of Text. Here the teletype stopped printing. The use of Ctrl-C, which has code 03, to abort something (usually a program) dates back to those days.
* EOT, 04 — End of Transmission. In UNIX, Ctrl-D, which has the same code, signals end of file for keyboard input.
* ENQ, 05 — Enquiry: "please confirm."
* ACK, 06 — Acknowledgement: "confirmed."
* BEL, 07 — Bell, an audible signal. Still in use today; written \a in C and C++.
* BS, 08 — Backspace, move back one character. Today it erases the preceding character.
* TAB, 09 — Tabulation, also written HT for Horizontal Tabulation. Written \t in many programming languages.
* LF, 0A — Line Feed. Today each line of a text file ends with this character, with CR, or with both (CR, then LF), depending on the operating system. Written \n in many programming languages; when output, it advances to a new line.
* VT, 0B — Vertical Tab.
* FF, 0C — Form Feed, start a new page.
* CR, 0D — Carriage Return. In many programming languages this character, written \r, can be used to return to the beginning of the line without advancing to the next one. In some operating systems the same character, entered as Ctrl-M, ends each line of a text file before the LF.
* SO, 0E — Shift Out: change the ribbon color (used with two-color ribbons; the color usually switched to red). Later it marked the start of a national character encoding.
* SI, 0F — Shift In: the inverse of Shift Out.
* DLE, 10 — Data Link Escape: the following characters have a special meaning.
* DC1, 11 — Device Control 1: turn on the punched-tape reader.
* DC2, 12 — Device Control 2: turn on the tape punch.
* DC3, 13 — Device Control 3: turn off the punched-tape reader.
* DC4, 14 — Device Control 4: turn off the tape punch.
* NAK, 15 — Negative Acknowledgement: "not confirmed," the opposite of ACK.
* SYN, 16 — Synchronization. This character was sent whenever something had to be transmitted to keep the line synchronized.
* ETB, 17 — End of Transmission Block. Text was sometimes split into blocks for technical reasons.
* CAN, 18 — Cancel (what was transmitted earlier).
* EM, 19 — End of Medium: the punched tape ran out, and so on.
* SUB, 1A — Substitute: the next character is in another color or from an additional character set. Today Ctrl-Z marks end of file for keyboard input in DOS and Windows, a use with no obvious connection to SUB's original meaning.
* ESC, 1B — Escape: the following characters mean something special.
* FS, 1C — File Separator.
* GS, 1D — Group Separator.
* RS, 1E — Record Separator.
* US, 1F — Unit Separator. Four levels of data structuring were thus supported: a message could consist of files, files of groups, groups of records, and records of units.

* DEL, 7F — Delete: erase the last character. Because DEL consists of all ones in binary, it could be punched over any character to obliterate it, and devices and programs ignored DEL just as they ignored NUL. Its code comes from the first text processors with punched-tape memory: a character was deleted by punching holes (representing binary ones) over all the positions of its code.
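Several of the control codes listed above survive as standard escape sequences in modern languages. A minimal Python sketch checking the correspondence, including the LF-versus-CR+LF line-ending difference mentioned for codes 0A and 0D:

```python
# A few ASCII control codes and their escape-sequence equivalents.
controls = {
    0x00: "\0",    # NUL
    0x07: "\a",    # BEL, the bell
    0x08: "\b",    # BS, backspace
    0x09: "\t",    # TAB (HT)
    0x0A: "\n",    # LF, line feed
    0x0D: "\r",    # CR, carriage return
    0x1B: "\x1b",  # ESC
}
for code, escape in controls.items():
    assert ord(escape) == code

# Line endings differ by platform: Unix uses LF alone,
# DOS/Windows uses CR followed by LF.
unix_line = "hello\n"
windows_line = "hello\r\n"
assert windows_line.replace("\r\n", "\n") == unix_line
```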