Thread: 64-bit integers for GUC

64-bit integers for GUC

From
Peter Eisentraut
Date:
ISTM that before long someone will want to use more than 2 GB for work_mem.  
Currently, you can't set more because it overflows the variable.  I'm not 
sure a wholesale switch of GUC integers to 64 bit is the solution.  Maybe 
changing some of the variables to reals would work.  Comments?
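
A minimal standalone C sketch (an editorial illustration, not from any posted
patch) of the overflow being described: a memory size expressed in bytes stops
fitting in a signed 32-bit int once it passes 2 GB.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t three_gb = 3LL * 1024 * 1024 * 1024;    /* 3 GB expressed in bytes */

        if (three_gb > INT32_MAX)
            printf("3 GB in bytes (%lld) does not fit in a signed 32-bit int (max %d)\n",
                   (long long) three_gb, INT32_MAX);
        return 0;
    }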

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/


Re: 64-bit integers for GUC

From
Peter Eisentraut
Date:
On Tuesday, 25 July 2006 14:15, Tom Lane wrote:
> Peter Eisentraut <peter_e@gmx.net> writes:
> > ISTM that before long someone will want to use more than 2 GB for
> > work_mem. Currently, you can't set more because it overflows the
> > variable.
>
> Yes you can, because the value is measured in KB.

Right, so there is probably a bug in my patch ...  Never mind then.  All the 
other options are OK with 32-bit ints.

> I'd be fairly worried about whether that wouldn't mean we fail
> completely on INT64_IS_BROKEN platforms ...

I wonder whether platforms with INT64_IS_BROKEN can address more than 2GB of 
memory anyway.

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/


Re: 64-bit integers for GUC

From
Tom Lane
Date:
Peter Eisentraut <peter_e@gmx.net> writes:
> On Tuesday, 25 July 2006 14:15, Tom Lane wrote:
>> I'd be fairly worried about whether that wouldn't mean we fail
>> completely on INT64_IS_BROKEN platforms ...

> I wonder whether platforms with INT64_IS_BROKEN can address more than 2GB of 
> memory anyway.

No, surely they can't (on all machines we support, "long" is at least as
wide as a pointer, cf Datum).  I'm just worried about whether normal GUC
behavior would work at all on such a machine.  We've so far tried to
preserve "it works as long as you don't try to use values larger than
2G" on such machines, and I'm not quite prepared to give that up.
        regards, tom lane


Re: 64-bit integers for GUC

From
Tom Lane
Date:
Peter Eisentraut <peter_e@gmx.net> writes:
> ISTM that before long someone will want to use more than 2 GB for work_mem.  
> Currently, you can't set more because it overflows the variable.

Yes you can, because the value is measured in KB.

Now, if you were to redefine it as being measured in bytes, you would
have a backlash, because people already are using values above 2GB.
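
A quick back-of-the-envelope check (editorial sketch, not PostgreSQL source)
of the headroom the kilobyte unit buys: a signed 32-bit int counting kilobytes
tops out around 2 TB.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Largest value a signed 32-bit int can hold, read as kilobytes */
        double max_gb = (double) INT32_MAX / (1024.0 * 1024.0);

        printf("INT32_MAX kilobytes = %.0f GB, i.e. roughly 2 TB\n", max_gb);
        return 0;
    }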

> I'm not sure a wholesale switch of GUC integers to 64 bit is the
> solution.

I'd be fairly worried about whether that wouldn't mean we fail
completely on INT64_IS_BROKEN platforms ...
        regards, tom lane


Re: 64-bit integers for GUC

From
Josh Berkus
Date:
Peter,

> I wonder whether platforms with INT64_IS_BROKEN can address more than 2GB of 
> memory anyway.
> 

To be quite frank, current PostgreSQL can't effectively use more than 
256MB of work_mem anyway.  We'd like to fix that, but it's not fixed yet 
AFAIK.

--Josh


Re: 64-bit integers for GUC

From
Robert Treat
Date:
On Tuesday 25 July 2006 14:28, Josh Berkus wrote:
> Peter,
>
> > I wonder whether platforms with INT64_IS_BROKEN can address more than 2GB
> > of memory anyway.
>
> To be quite frank, current PostgreSQL can't effectively use more than
> 256MB of work_mem anyway.  We'd like to fix that, but it's not fixed yet
> AFAIK.
>

Josh, can you clarify this statement for me?  Setting work_mem higher than 
256MB is common practice in certain cases (a database restore, for example).  
Are you speaking in a high-volume OLTP sense, or something beyond this?

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL


Re: 64-bit integers for GUC

From
Tom Lane
Date:
Robert Treat <xzilla@users.sourceforge.net> writes:
> On Tuesday 25 July 2006 14:28, Josh Berkus wrote:
>> To be quite frank, current PostgreSQL can't effectively use more than
>> 256MB of work_mem anyway.  We'd like to fix that, but it's not fixed yet
>> AFAIK.

> Josh, can you clarify this statement for me?

Perhaps I shouldn't put words in Josh's mouth, but I *think* what he
meant is that the tuplesort code does not get any faster once work_mem
exceeds a few hundred meg.  I believe we've addressed that to some
extent in CVS HEAD, but it's a fair gripe against the existing release
branches.

I'm not aware that anyone has done any work to characterize performance
vs work_mem setting for any of the other uses of work_mem (such as hash
table sizes).
        regards, tom lane