Unicode » History » Version 1
Gregg -, 09/09/2009 03:25 AM
Background:

* See [http://unicode.org/reports/tr17/ UTR 17, Unicode Character Encoding Model] - if you're brave enough to tackle the mysteries of CCSs, CEFs, CESs, etc.
* See also the [http://site.icu-project.org/ ICU] page for lots of detailed documentation on how Unicode is supposed to work in running software
* There are three encoding forms: UTF-8, UTF-16, and UTF-32; there are also UCS-2 and UCS-4.
* JSON must be Unicode.
* The default encoding form of JSON is UTF-8, which effectively means UTF-8 must be supported, but JSON data can also be delivered in the other two forms.
* SPARQL syntax is UTF-8 Unicode: "The encoding is always UTF-8 [RFC3629]. Unicode code points may also be expressed using an \uXXXX (U+0 to U+FFFF) or \UXXXXXXXX syntax (for U+10000 onwards) where X is a hexadecimal digit [0-9A-F]". In other words, a SPARQL processor must detect and reject non-UTF-8 input. But it isn't clear whether a conformant SPARQL parser ''must'' accept Unicode expressed with these escapes (an ASCII-only notation, loosely reminiscent of UTF-7).
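As a sketch of the points above (Python is used purely for illustration; `unescape_sparql` is a hypothetical helper, not part of any SPARQL library), the three encoding forms and the \uXXXX / \UXXXXXXXX escape syntax can be demonstrated like this:

```python
import re

# The same code points in the three encoding forms: width varies in UTF-8
# and UTF-16, but is always 4 bytes per code point in UTF-32.
for ch in ("A", "\u00e9", "\U0001F40D"):   # ASCII, Latin-1 range, outside the BMP
    print(f"U+{ord(ch):05X}",
          ch.encode("utf-8").hex(),        # 1 to 4 bytes
          ch.encode("utf-16-be").hex(),    # 2 bytes, or a 4-byte surrogate pair
          ch.encode("utf-32-be").hex())    # always 4 bytes

# SPARQL-style escapes: \uXXXX for U+0 to U+FFFF, \UXXXXXXXX from U+10000 on.
def unescape_sparql(text: str) -> str:
    """Hypothetical helper: replace \\uXXXX and \\UXXXXXXXX escapes
    with the code points they name."""
    return re.sub(r"\\u([0-9A-Fa-f]{4})|\\U([0-9A-Fa-f]{8})",
                  lambda m: chr(int(m.group(1) or m.group(2), 16)),
                  text)

print(unescape_sparql(r"caf\u00E9 \U0001F40D"))
```

The escaped form is pure ASCII, which is why it survives any transport; whether a parser is required to accept it is the open question above.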
Requirements:

* The XML header of a result should always explicitly declare the encoding.
* Content negotiation (Accept-Charset, the Content-Type 'charset' parameter, etc.) should be used to specify encodings and forms.
* A SPARQL query whose Accept header specifies JSON must always return results in UTF-8 if no other charset is requested.
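A minimal sketch of the negotiation requirement above, assuming a server-side responder (`choose_charset` and `json_response` are hypothetical names, not an existing API):

```python
import json
from typing import Optional

def choose_charset(accept_charset: Optional[str]) -> str:
    # Hypothetical helper: take the first supported charset the client
    # listed in Accept-Charset; fall back to UTF-8 when none is requested.
    supported = ("utf-8", "utf-16", "utf-32")
    if accept_charset:
        for item in accept_charset.split(","):
            charset = item.split(";")[0].strip().lower()
            if charset in supported:
                return charset
    return "utf-8"   # the default when no other charset is requested

def json_response(bindings: dict, accept_charset: Optional[str] = None):
    # Encode the JSON body in the negotiated charset and echo it back
    # in the Content-Type 'charset' parameter.
    charset = choose_charset(accept_charset)
    body = json.dumps(bindings, ensure_ascii=False).encode(charset)
    headers = {"Content-Type": f"application/sparql-results+json; charset={charset}"}
    return headers, body

headers, body = json_response({"x": "caf\u00e9"})
print(headers["Content-Type"])   # ends with charset=utf-8
```

The same pattern would apply to XML results, with the declared charset also appearing in the `<?xml ... encoding="..."?>` header.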
Other:

* Acceptance and conversion of other encodings for incoming data?
* Collations?
* Date comparisons?
* Other locale-specific logic?
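The collation question above can be illustrated with a quick sketch: naive code-point ordering disagrees with what users of a given locale expect, which is why collation support is a real design decision. (The `de_DE.UTF-8` locale name is an assumption; it may not be installed on a given system.)

```python
import locale

words = ["Zebra", "\u00c4pfel", "apfel"]

# Code-point order sorts all uppercase ASCII first and pushes 'Äpfel'
# (U+00C4) after 'apfel' - rarely what a German-speaking user expects.
print(sorted(words))

try:
    # Locale-aware collation groups 'Äpfel' with 'apfel' and sorts
    # case-insensitively, per the locale's rules.
    locale.setlocale(locale.LC_COLLATE, "de_DE.UTF-8")
    print(sorted(words, key=locale.strxfrm))
except locale.Error:
    pass  # locale not installed; the point stands that the orders differ
```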