Re: UTF8 national character data type support WIP patch and list of open issues.
From: MauMau
Subject: Re: UTF8 national character data type support WIP patch and list of open issues.
Date:
Msg-id: 6C7E3D3C38CE4E0BAF20269F7E2DBA21@maumau
In reply to: Re: UTF8 national character data type support WIP patch and list of open issues. (Albe Laurenz <laurenz.albe@wien.gv.at>)
Responses: Re: UTF8 national character data type support WIP patch and list of open issues.
List: pgsql-hackers
From: "Albe Laurenz" <laurenz.albe@wien.gv.at> > If I understood the discussion correctly the use case is that > there are advantages to having a database encoding different > from UTF-8, but you'd still want sume UTF-8 columns. > > Wouldn't it be a better design to allow specifying the encoding > per column? That would give you more flexibility. Yes, you are right. In the previous discussion: - That would be nice if available, but it is hard to implement multiple encodings in one database. - Some people (I'm not sure many or few) are NCHAR/NVARCHAR in other DBMSs. To invite them to PostgreSQL, it's important to support national character feature syntactically and document it in the manual. This is the first step. - As the second step, we can implement multiple encodings in one database. According to the SQL standard, "NCHAR(n)" is equivalent to "CHAR(n) CHARACTER SET cs", where cs is an implementation-defined character set. Regards MauMau