Discussion: Is it necessary to rewrite table while increasing the scale of datatype numeric?


Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
wangshuo@highgo.com.cn
Date:
Hi, Hackers!

I find that it takes a long time when I increase the scale of a numeric
datatype.
By checking the code, I found that's because it needs to rewrite that
table's file.
After checking that table's data file, I found only parameter n_header
changed.
And, I found the data in that numeric field never changed.
So I think it's not necessary to rewrite the table's file in this case.

If anyone has more ideas about this, please come and discuss!

Best Regards!

      Yours,
      Wang Shuo
      HighGo Software Co.,Ltd.
      September 4, 2013



Re: Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Noah Misch
Date:
On Wed, Sep 04, 2013 at 12:08:48PM +0800, wangshuo@highgo.com.cn wrote:
> I find that it takes a long time when I increase the scale of a numeric  
> datatype.
> By checking the code, I found that's because it needs to rewrite that  
> table's file.
> After checking that table's data file, I found only parameter n_header  
> changed.
> And, I found the data in that numeric field never changed.
> So I think it's not necessary to rewrite the table's file in this case.

n_header is part of the numeric field's data.  That's not just pedantry: the
display scale stored in n_header affects how numeric_out() formats the value.
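To see this in psql (a minimal sketch; the output, shown as comments, is
what one would expect on a stock installation):

select 5.25::numeric(10,2) as scale_2,
       5.25::numeric(10,4) as scale_4;
-- scale_2 | scale_4
-- --------+---------
--    5.25 |  5.2500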

-- 
Noah Misch
EnterpriseDB                                 http://www.enterprisedb.com



Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Jeff Janes
Date:
On Tue, Sep 3, 2013 at 9:08 PM,  <wangshuo@highgo.com.cn> wrote:
> Hi, Hackers!
>
> I find that it takes a long time when I increase the scale of a numeric
> datatype.
> By checking the code, I found that's because it needs to rewrite that
> table's file.
> After checking that table's data file, I found only parameter n_header
> changed.
> And, I found the data in that numeric field never changed.
> So I think it's not necessary to rewrite the table's file in this case.
>
> If anyone has more ideas about this, please come and discuss!

This was fixed in version 9.2.  You must be using an older version.

Cheers,

Jeff



Re: Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
wangshuo@highgo.com.cn
Date:
On 2013-09-04 19:30, Noah Misch replied:
> On Wed, Sep 04, 2013 at 12:08:48PM +0800, wangshuo@highgo.com.cn
> wrote:
>> I find that it takes a long time when I increase the scale of a
>> numeric
>> datatype.
>> By checking the code, I found that's because it needs to rewrite
>> that
>> table's file.
>> After checking that table's data file, I found only parameter
>> n_header
>> changed.
>> And, I found the data in that numeric field never changed.
>> So I think it's not necessary to rewrite the table's file in this
>> case.
>

Noah Misch <noah@leadboat.com> wrote:
> n_header is part of the numeric field's data.  That's not just
> pedantry: the
> display scale stored in n_header affects how numeric_out() formats
> the value.

Thanks for your reply.

Based on what you said, I think increasing the scale only leads to a
different display. There's no difference between 5.25 and 5.2500 in use.
So there's no need to rewrite the table.
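A minimal sketch of that claim in psql: the values compare equal, and
only the text output differs.

select 5.25::numeric = 5.2500::numeric as equal,  -- t: comparison ignores trailing zeros
       5.2500::numeric::text as displayed;        -- '5.2500': the display scale survives output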




Re: Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
wangshuo@highgo.com.cn
Date:
On 2013-09-04 23:41, Jeff Janes replied:
> On Tue, Sep 3, 2013 at 9:08 PM,  <wangshuo@highgo.com.cn> wrote:
>> Hi, Hackers!
>>
>> I find that it takes a long time when I increase the scale of a
>> numeric
>> datatype.
>> By checking the code, I found that's because it needs to rewrite
>> that
>> table's file.
>> After checking that table's data file, I found only parameter
>> n_header
>> changed.
>> And, I found the data in that numeric field never changed.
>> So I think it's not necessary to rewrite the table's file in this
>> case.
>>
>> If anyone has more ideas about this, please come and discuss!
>

Jeff Janes <jeff.janes@gmail.com> wrote:
> This was fixed in version 9.2.  You must be using an older version.
>
> Cheers,
>
> Jeff

Thanks for your reply.

To declare a column of type numeric, use the syntax:
NUMERIC(precision, scale).
What I was talking about is this scale, not the precision.


I made a test on PG 9.2, as follows:
postgres=# select version();
                                                    version
--------------------------------------------------------------------------------------------------------------
  PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
4.7.2 20121109 (Red Hat 4.7.2-8), 64-bit
(1 row)

postgres=# create table tt(t1 numeric(10,5));
CREATE TABLE
postgres=# insert into tt values (2.56);
INSERT 0 1
postgres=# select pg_relation_filepath('tt');
  pg_relation_filepath
----------------------
  base/12914/16384
(1 row)

postgres=# alter table tt alter COLUMN t1 type numeric(10,6);
ALTER TABLE
postgres=# select pg_relation_filepath('tt');
  pg_relation_filepath
----------------------
  base/12914/16387
(1 row)

The file path changed, so 9.2 still rewrites the table here, though I
think there's no need to.
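For contrast, a precision-only widening (the case Jeff apparently
tested) does avoid the rewrite on 9.2; a sketch with a hypothetical
table tp:

create table tp (t1 numeric(10,5));
select pg_relation_filepath('tp');                  -- note the path
alter table tp alter column t1 type numeric(12,5);  -- precision up, scale unchanged
select pg_relation_filepath('tp');                  -- same path: no rewrite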

      Wang Shuo
      HighGo Software Co.,Ltd.
      September 5, 2013







Re: Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Noah Misch
Date:
On Thu, Sep 05, 2013 at 11:07:43AM +0800, wangshuo@highgo.com.cn wrote:
> On 2013-09-04 19:30, Noah Misch replied:
>> On Wed, Sep 04, 2013 at 12:08:48PM +0800, wangshuo@highgo.com.cn  
>> wrote:
>>> I find that it takes a long time when I increase the scale of a  
>>> numeric
>>> datatype.
>>> By checking the code, I found that's because it needs to rewrite  
>>> that
>>> table's file.
>>> After checking that table's data file, I found only parameter  
>>> n_header
>>> changed.
>>> And, I found the data in that numeric field never changed.
>>> So I think it's not necessary to rewrite the table's file in this
>>> case.
>>
>
> Noah Misch <noah@leadboat.com> wrote:
>> n_header is part of the numeric field's data.  That's not just  
>> pedantry: the
>> display scale stored in n_header affects how numeric_out() formats the 
>> value.
>
> Thanks for your reply.
>
> Based on what you said, I think increasing the scale only leads to a
> different display. There's no difference between 5.25 and 5.2500 in use.
> So there's no need to rewrite the table.

Right or wrong, our numeric type caters to applications that do care about the
difference between those outputs.  I grant that others do not care.

-- 
Noah Misch
EnterpriseDB                                 http://www.enterprisedb.com



Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Jeff Janes
Date:
On Wed, Sep 4, 2013 at 10:06 PM,  <wangshuo@highgo.com.cn> wrote:
> On 2013-09-04 23:41, Jeff Janes replied:
>
>> On Tue, Sep 3, 2013 at 9:08 PM,  <wangshuo@highgo.com.cn> wrote:
>>>
>>> Hi, Hackers!
>>>
>>> I find that it takes a long time when I increase the scale of a numeric
>>> datatype.
>>> By checking the code, I found that's because it needs to rewrite that
>>> table's file.
>>> After checking that table's data file, I found only parameter n_header
>>> changed.
>>> And, I found the data in that numeric field never changed.
>>> So I think it's not necessary to rewrite the table's file in this case.
>>>
>>> If anyone has more ideas about this, please come and discuss!
>>
>>
>
> Jeff Janes <jeff.janes@gmail.com> wrote:
>>
>> This was fixed in version 9.2.  You must be using an older version.
>>
>> Cheers,
>>
>> Jeff
>
>
> Thanks for your reply.
>
> To declare a column of type numeric, use the syntax:
> NUMERIC(precision, scale).
> What I was talking about is this scale, not the precision.

You're right, I had tested a change in precision, not in scale.  Sorry.

In order to avoid the rewrite, the code would have to be changed to
look up the column definition and if it specifies the scale, then
ignore the per-row n_header, and look at the n_header only if the
column is NUMERIC with no precision or scale.  That should
conceptually be possible, but I don't know how hard it would be to
implement--it sounds pretty invasive to me.  Then if the column was
altered from NUMERIC with scale to be a plain NUMERIC, it would have
to rewrite the table to enforce the row-wise scale to match the old
column-wise scale.  Whereas now, that alter doesn't need a rewrite.
I don't know if this would be an overall gain or not.
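For reference, unconstrained NUMERIC already keeps a per-value scale
today; a minimal sketch with a hypothetical table tn:

create table tn (x numeric);             -- no precision or scale constraint
insert into tn values (5.25), (5.2500);
select x from tn;                        -- returns 5.25 and 5.2500: each value keeps its own scale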

Cheers,

Jeff



Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Greg Stark
Date:

On Thu, Sep 5, 2013 at 6:41 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> Then if the column was
> altered from NUMERIC with scale to be a plain NUMERIC, it would have
> to rewrite the table to enforce the row-wise scale to match the old
> column-wise scale.  Whereas now, that alter doesn't need a rewrite.
> I don't know if this would be an overall gain or not.

We've talked about cases like this in the past. It's mostly a SMOP (a simple matter of programming) and I think it may already be on the TODO.

The main difficulty is that Postgres is very extensible. So to implement this you need to think bigger than NUMERIC. It should also be possible to alter a column from varchar(5) to varchar(10) for example (but not the other way around).

One way to do it would be to extend pg_type to have another column that specifies a function. That function would take the old and new typmod (which is what stores the 5 in varchar(5)) and tell the server whether it's a safe change to make without rechecking.

Another way might be to overload the cast functions, though they currently receive no information about the typmod. They might have the benefit of being able to handle things like varchar(5) -> text though.

But it has to be that general. Any data type should be able to specify whether an old and new typmod are compatible.
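For concreteness, a psql sketch of the raw values such a function would
compare, using a hypothetical table tm (the typmod encodings shown are
the ones varchar and numeric use today):

create table tm (v varchar(5), n numeric(10,5));
select attname, atttypmod,
       case when attname = 'n'
            then format('precision=%s, scale=%s',
                        ((atttypmod - 4) >> 16) & 65535,  -- numeric packs both into one int
                        (atttypmod - 4) & 65535)
            else format('max length=%s', atttypmod - 4)   -- varchar stores n + 4 (VARHDRSZ)
       end as decoded
  from pg_attribute
 where attrelid = 'tm'::regclass and attnum > 0;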

--
greg

Re: Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Alvaro Herrera
Date:
Greg Stark wrote:

> The main difficulty is that Postgres is very extensible. So to implement
> this you need to think bigger than NUMERIC. It should also be possible to
> alter a column from varchar(5) to varchar(10) for example (but not the
> other way around).

We already allow that.  See commits
8f9fe6edce358f7904e0db119416b4d1080a83aa and
3cc0800829a6dda5347497337b0cf43848da4acf
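That is easy to confirm with a hypothetical table tv (on versions
carrying those commits the file path stays the same):

create table tv (v varchar(5));
select pg_relation_filepath('tv');               -- note the path
alter table tv alter column v type varchar(10);  -- widening only
select pg_relation_filepath('tv');               -- same path: no rewrite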

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Noah Misch
Date:
On Thu, Sep 05, 2013 at 10:41:25AM -0700, Jeff Janes wrote:
> In order to avoid the rewrite, the code would have to be changed to
> look up the column definition and if it specifies the scale, then
> ignore the per-row n_header, and look at the n_header only if the
> column is NUMERIC with no precision or scale.  That should
> conceptually be possible, but I don't know how hard it would be to
> implement--it sounds pretty invasive to me.  Then if the column was
> altered from NUMERIC with scale to be a plain NUMERIC, it would have
> to rewrite the table to enforce the row-wise scale to match the old
> column-wise scale.  Whereas now, that alter doesn't need a rewrite.
> I don't know if this would be an overall gain or not.

Invasive indeed.  The type-supplementary data would need to reach essentially
everywhere we now convey a type OID.  Compare the invasiveness of adding
collation support.  However, this is not the first time it would have been
useful.  We currently store a type OID in every array and composite datum.
That's wasteful and would be unnecessary if we reliably marshalled similar
information to all the code needing it.  Given a few more use cases, the
effort would perhaps start to look credible relative to the benefits.

--
Noah Misch
EnterpriseDB                                 http://www.enterprisedb.com



Re: Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
wangshuo@highgo.com.cn
Date:
On 2013-09-06 01:41, Jeff Janes replied:
> On Wed, Sep 4, 2013 at 10:06 PM,  <wangshuo@highgo.com.cn> wrote:
>> On 2013-09-04 23:41, Jeff Janes replied:
>>
>>> On Tue, Sep 3, 2013 at 9:08 PM,  <wangshuo@highgo.com.cn> wrote:
>>>>
>>>> Hi, Hackers!
>>>>
>>>> I find that it takes a long time when I increase the scale of a
>>>> numeric
>>>> datatype.
>>>> By checking the code, I found that's because it needs to rewrite
>>>> that
>>>> table's file.
>>>> After checking that table's data file, I found only parameter
>>>> n_header
>>>> changed.
>>>> And, I found the data in that numeric field never changed.
>>>> So I think it's not necessary to rewrite the table's file in this
>>>> case.
>>>>
>>>> If anyone has more ideas about this, please come and discuss!
>>>
>>>
>>
>> Jeff Janes <jeff.janes@gmail.com> wrote:
>>>
>>> This was fixed in version 9.2.  You must be using an older version.
>>>
>>> Cheers,
>>>
>>> Jeff
>>
>>
>> Thanks for your reply.
>>
>> To declare a column of type numeric, use the syntax:
>> NUMERIC(precision, scale).
>> What I was talking about is this scale, not the precision.
>

Jeff Janes <jeff.janes@gmail.com> wrote:
> You're right, I had tested a change in precision, not in scale.
> Sorry.
>
> In order to avoid the rewrite, the code would have to be changed to
> look up the column definition and if it specifies the scale, then
> ignore the per-row n_header, and look at the n_header only if the
> column is NUMERIC with no precision or scale.  That should
> conceptually be possible, but I don't know how hard it would be to
> implement--it sounds pretty invasive to me.  Then if the column was
> altered from NUMERIC with scale to be a plain NUMERIC, it would have
> to rewrite the table to enforce the row-wise scale to match the old
> column-wise scale.  Whereas now, that alter doesn't need a rewrite.
> I don't know if this would be an overall gain or not.
>
> Cheers,
>
> Jeff

I modified the code for this situation. I consider it very simple.
It does not rewrite the table file when only the scale has been
increased.

I modified the code as follows:

static bool ATColumnChangeRequiresRewrite(Node *expr, AttrNumber varattno,
                                          int32 oldtypemod, int32 newtypemod);


In the function ATExecAlterColumnGenericOptions:

    if (ATColumnChangeRequiresRewrite(transform, attnum,
                                      attTup->atttypmod, targettypmod))
        tab->rewrite = true;

In the function ATColumnChangeRequiresRewrite:

    else if (IsA(expr, FuncExpr))
    {
        int32       between = 0;

        /*
         * Check whether funcresulttype == 1700 (NUMERIC) and funcid == 1703
         * (numeric's length-coercion function) when the user changes the
         * datatype.  If both hold, the user is altering a numeric column,
         * so we compute 'between'.
         */
        if (((FuncExpr *) expr)->funcresulttype == 1700 &&
            ((FuncExpr *) expr)->funcid == 1703)
            between = newtypemod - oldtypemod;

        /*
         * If 'between' satisfies the following condition, only the scale
         * of the numeric column was increased.
         */
        if (between > 0 && between < 1001)
            return false;
        else
            return true;
    }
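The magic numbers above can be checked from SQL: 1700 is numeric's type
OID, 1703 is the OID of its length-coercion function, and the typmod
packs ((precision << 16) | scale) + 4, so with precision unchanged a
scale increase of k raises the typmod by exactly k, while scale is
capped at 1000 -- hence the "between > 0 && between < 1001" test. A
sketch:

select 'numeric'::regtype::oid as type_oid,                        -- 1700
       'numeric(numeric,integer)'::regprocedure::oid as func_oid,  -- 1703
       (((10 << 16) | 6) + 4) - (((10 << 16) | 5) + 4)
           as between_for_scale_plus_1;                            -- 1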

I have attached a patch with this modification.

      Wang Shuo
      HighGo Software Co.,Ltd.
      September 6, 2013
Attachments
"wangshuo@highgo.com.cn" <wangshuo@highgo.com.cn> wrote:

> I modified the code for this situation. I consider it very simple.

> It does not rewrite the table file when only the scale has been
> increased.

This patch would allow data in a column which was not consistent
with the column definition:

test=# create table n (val numeric(5,2));
CREATE TABLE
test=# insert into n values ('123.45');
INSERT 0 1
test=# select * from n;
  val  
--------
 123.45
(1 row)

test=# alter table n alter column val type numeric(5,4);
ALTER TABLE
test=# select * from n;
  val  
--------
 123.45
(1 row)

Without your patch the ALTER TABLE command gets this error (as it
should):

test=# alter table n alter column val type numeric(5,4);
ERROR:  numeric field overflow
DETAIL:  A field with precision 5, scale 4 must round to an absolute value less than 10^1.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Greg Stark
Date:

On Thu, Sep 5, 2013 at 8:53 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:
> Greg Stark wrote:
>
>> The main difficulty is that Postgres is very extensible. So to implement
>> this you need to think bigger than NUMERIC. It should also be possible to
>> alter a column from varchar(5) to varchar(10) for example (but not the
>> other way around).
>
> We already allow that.  See commits
> 8f9fe6edce358f7904e0db119416b4d1080a83aa and
> 3cc0800829a6dda5347497337b0cf43848da4acf

Ah, nice. I missed that.

So the issue here is that NUMERIC has an additional concept of scale that is buried in the values and that this scale is set based on the typmod that was in effect when the value was stored. If you change the typmod on the column it currently rescales all the values in the table? There's even a comment to that effect on the commit you pointed at.

But I wonder if we could just declare that that's not what the scale typmod does. That it's just a maximum scale, and that it's perfectly valid for NUMERIC data with lower scales to be stored in a column than the typmod says. In a way the current behaviour is like bpchar, but it would be nice if it was more like varchar.
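The analogy in psql terms (a minimal sketch): a scale-constrained column
pads every value to the declared scale, much as bpchar pads with blanks,
while plain NUMERIC keeps what was entered:

select 5.2::numeric(10,4) as padded,      -- 5.2000
       5.2::numeric       as as_entered;  -- 5.2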


--
greg

Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Robert Haas
Date:

On Fri, Sep 6, 2013 at 12:34 PM, Greg Stark <stark@mit.edu> wrote:
> But I wonder if we could just declare that that's not what the scale typmod
> does. That it's just a maximum scale but it's perfectly valid for NUMERIC
> data with lower scales to be stored in a column than the typmod says. In a
> way the current behaviour is like bpchar but it would be nice if it was more
> like varchar

Sure, but the point is that 5.0000 is not the same as 5.000 today.  If
you start whacking this around you'll be changing that behavior, I
think.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Hannu Krosing
Date:

On 09/06/2013 07:57 PM, Robert Haas wrote:
> On Fri, Sep 6, 2013 at 12:34 PM, Greg Stark <stark@mit.edu> wrote:
>> But I wonder if we could just declare that that's not what the scale typmod
>> does. That it's just a maximum scale but it's perfectly valid for NUMERIC
>> data with lower scales to be stored in a column than the typmod says. In a
>> way the current behaviour is like bpchar but it would be nice if it was more
>> like varchar
> Sure, but the point is that 5.0000 is not the same as 5.000 today.  If
> you start whacking this around you'll be changing that behavior, I
> think.
>
So we already get it wrong by rewriting?



--
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ




Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Robert Haas
Date:

On Fri, Sep 6, 2013 at 2:34 PM, Hannu Krosing <hannu@2ndquadrant.com> wrote:
> On 09/06/2013 07:57 PM, Robert Haas wrote:
>> On Fri, Sep 6, 2013 at 12:34 PM, Greg Stark <stark@mit.edu> wrote:
>>> But I wonder if we could just declare that that's not what the scale typmod
>>> does. That it's just a maximum scale but it's perfectly valid for NUMERIC
>>> data with lower scales to be stored in a column than the typmod says. In a
>>> way the current behaviour is like bpchar but it would be nice if it was more
>>> like varchar
>> Sure, but the point is that 5.0000 is not the same as 5.000 today.  If
>> you start whacking this around you'll be changing that behavior, I
>> think.
>>
> So we already get it wrong by rewriting?

Ah, no, I don't think so.  If you have 5.00000000000000000 and lower
the scale, it'll truncate off some of those zeroes to make it fit.
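A minimal sketch of that:

select 5.00000000000000000::numeric(10,2);  -- rounds to the new scale: 5.00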

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Tom Lane
Date:

Robert Haas <robertmhaas@gmail.com> writes:
> Sure, but the point is that 5.0000 is not the same as 5.000 today.  If
> you start whacking this around you'll be changing that behavior, I
> think.

Yeah.  And please note that no matter what the OP may think, a lot of
people *do* consider that there's a useful distinction between 5.000
and 5.0000 --- it might indicate the number of significant digits in
a measurement, for example.  I do not see us abandoning that just to make
certain cases of ALTER TABLE faster.

There was some upthread discussion about somehow storing the scale info
at the column level rather than the individual-datum level.  If we could
do that, then it'd be possible to make this type of ALTER TABLE fast.
However, the work involved to do that seems enormously out of proportion
to the benefit, mainly because there just isn't any convenient way to
trace a Datum to its source column, even assuming it's got one.

            regards, tom lane



Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Jeff Janes
Date:

On Fri, Sep 6, 2013 at 9:34 AM, Greg Stark <stark@mit.edu> wrote:
>
>
> But I wonder if we could just declare that that's not what the scale typmod
> does. That it's just a maximum scale but it's perfectly valid for NUMERIC
> data with lower scales to be stored in a column than the typmod says. In a
> way the current behaviour is like bpchar but it would be nice if it was more
> like varchar

I agree that this makes more sense than what is currently done.  But
are we going to break backwards compatibility to achieve it?  Do the
standards specify a behavior here?

Cheers,

Jeff



Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Tom Lane
Date:

Greg Stark <stark@mit.edu> writes:
> But I wonder if we could just declare that that's not what the scale typmod
> does. That it's just a maximum scale but it's perfectly valid for NUMERIC
> data with lower scales to be stored in a column than the typmod says. In a
> way the current behaviour is like bpchar but it would be nice if it was
> more like varchar

BTW, note that if you want varying scale in a column, you can declare it
as unconstrained "numeric".  So that case corresponds to "text", whereas
as you rightly say, numeric(m,n) is more like bpchar(n).  It's true there
is nothing corresponding to varchar(n), but how much do you really need
that case?  The SQL standard didn't see fit to invent a variant of numeric
that worked that way, so they at least aren't buying it.

            regards, tom lane



Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Jim Nasby
Date:
On 9/5/13 10:47 PM, Noah Misch wrote:
> On Thu, Sep 05, 2013 at 10:41:25AM -0700, Jeff Janes wrote:
>> In order to avoid the rewrite, the code would have to be changed to
>> look up the column definition and if it specifies the scale, then
>> ignore the per-row n_header, and look at the n_header only if the
>> column is NUMERIC with no precision or scale.  That should
>> conceptually be possible, but I don't know how hard it would be to
>> implement--it sounds pretty invasive to me.  Then if the column was
>> altered from NUMERIC with scale to be a plain NUMERIC, it would have
>> to rewrite the table to enforce the row-wise scale to match the old
>> column-wise scale.  Whereas now, that alter doesn't need a rewrite.
>> I don't know if this would be an overall gain or not.
>
> Invasive indeed.  The type-supplementary data would need to reach essentially
> everywhere we now convey a type OID.  Compare the invasiveness of adding
> collation support.  However, this is not the first time it would have been
> useful.  We currently store a type OID in every array and composite datum.
> That's wasteful and would be unnecessary if we reliably marshalled similar
> information to all the code needing it.  Given a few more use cases, the
> effort would perhaps start to look credible relative to the benefits.

Aren't there cases where PL/pgsql gets hosed by this? Or even functions in general?

I also have a vague memory of some features that would benefit from being
able to have typemod info available at a tuple level in a table, not just
for the entire table. Unfortunately I can't remember why we wanted that...
(Alvaro, do you recall? I'm pretty sure it's something we'd discussed at
some point.)
--
Jim C. Nasby, Data Architect                       jim@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net



Re: Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Alvaro Herrera
Date:
Jim Nasby wrote:

> I also have a vague memory of some features that would benefit from
> being able to have typemod info available at a tuple level in a table,
> not just for the entire table. Unfortunately I can't remember why we
> wanted that... (Alvaro, do you recall? I'm pretty sure it's something
> we'd discussed at some point.)

I don't remember anything concrete either.  Maybe it was the E.164 type
stuff, where we wanted auxiliary info to dictate what kind of pattern to
{match on input, apply on output}?

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Jeff Janes
Date:

On Fri, Sep 6, 2013 at 11:47 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> Sure, but the point is that 5.0000 is not the same as 5.000 today.  If
>> you start whacking this around you'll be changing that behavior, I
>> think.
>
> Yeah.  And please note that no matter what the OP may think, a lot of
> people *do* consider that there's a useful distinction between 5.000
> and 5.0000 --- it might indicate the number of significant digits in
> a measurement, for example.  I do not see us abandoning that just to make
> certain cases of ALTER TABLE faster.

But note that the current behavior is worse in this regard.  If you specify a scale of 4 at the column level, then it is not possible to distinguish between 5.000 and 5.0000 on a per-value basis within that column.  If the scale at the column level was taken as the maximum scale, not the only allowed one, then that distinction could be recorded. That behavior seems more sensible to me (metrologically speaking, regardless of alter table performance aspects), but I don't see how to get there from here with acceptable compatibility breakage.
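A sketch of that loss of information, with a hypothetical table tj:

create table tj (x numeric(10,4));
insert into tj values (5.000), (5.0000);
select x from tj;   -- both rows come back as 5.0000; the entered scales are gone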

My lesson from going over this thread is, just use numeric, not numeric(x,y), unless you are storing currency or need compatibility with a different database system (in which case, good luck with that).

Cheers,

Jeff

Re: Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
wangshuo@highgo.com.cn
Date:
> "wangshuo@highgo.com.cn" <wangshuo@highgo.com.cn> wrote:
>
>> I modified the code for this situation. I consider it very simple.
>
>> It does not rewrite the table file when only the scale has been
>> increased.
>

Kevin Grittner <kgrittn@ymail.com> wrote:
> This patch would allow data in a column which was not consistent
> with the column definition:
>
> test=# create table n (val numeric(5,2));
> CREATE TABLE
> test=# insert into n values ('123.45');
> INSERT 0 1
> test=# select * from n;
>   val  
> --------
>  123.45
> (1 row)
>
> test=# alter table n alter column val type numeric(5,4);
> ALTER TABLE
> test=# select * from n;
>   val  
> --------
>  123.45
> (1 row)
>
> Without your patch the ALTER TABLE command gets this error (as it
> should):
>
> test=# alter table n alter column val type numeric(5,4);
> ERROR:  numeric field overflow
> DETAIL:  A field with precision 5, scale 4 must round to an absolute
> value less than 10^1.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company

Thanks for your reply and test.
I have added a new function named ATNumericColumnChangeRequiresCheck to
check the data when the scale of a numeric column increases.
Now the ALTER TABLE command reports this error:

postgres=# alter table tt alter  COLUMN t1 type numeric (5,4);
ERROR:  numeric field overflow
DETAIL:  A field with precision 5, scale 4 must round to an absolute
value less than 10^1.
STATEMENT:  alter table tt alter  COLUMN t1 type numeric (5,4);
ERROR:  numeric field overflow
DETAIL:  A field with precision 5, scale 4 must round to an absolute
value less than 10^1.

I have attached a new patch with this modification.

I think this 'altering field type' model could apply to every type in
the database: make a different modification to a column's datatype for
each situation.
For example, when you modify the scale of a numeric column, if you
consider 5.0 and 5.00 to be different, the table file must be
rewritten; otherwise, it needn't be.


      Wang Shuo
      HighGo Software Co.,Ltd.
      September 16, 2013

Attachments

Re: [HACKERS] Is it necessary to rewrite table while increasing the scale of datatype numeric?

From
Robert Haas
Date:

On Sun, Sep 15, 2013 at 8:05 PM, Jeff Janes <jeff.janes@gmail.com> wrote:
> But note that the current behavior is worse in this regard.  If you specify
> a scale of 4 at the column level, then it is not possible to distinguish
> between 5.000 and 5.0000 on a per-value basis within that column.  If the
> scale at the column level was taken as the maximum scale, not the only
> allowed one, then that distinction could be recorded. That behavior seems
> more sensible to me (metrologically speaking, regardless of alter table
> performance aspects), but I don't see how to get there from here with
> acceptable compatibility breakage.

I think I'd probably agree with that in a green field, but as you say,
I can't see accepting the backward compatibility break at this point.
After all, you can get variable-precision in a single column by
declaring it as unqualified numeric.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company