Discussion: the right time to vacuum the database?
How can I know when it is time to run vacuumdb? I set up a crontab entry to run vacuumdb 3 times a day, because my database size increased from 440MB to 460MB in 8 hours.
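A crontab entry like the one described would typically look like the sketch below; the database name and the path to vacuumdb are illustrative assumptions, not details from this thread.

    # hypothetical crontab entry for the postgres user: vacuumdb three times a day
    # ("mydb" and the binary path are assumptions)
    0 6,14,22 * * *   /usr/bin/vacuumdb --analyze --quiet mydb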
On Tue, 2007-06-05 at 03:33 -0700, Charles.Hou wrote:
> How can I know when it is time to run vacuumdb? I set up a crontab entry
> to run vacuumdb 3 times a day, because my database size increased from
> 440MB to 460MB in 8 hours.

Have you looked at autovacuum? It can handle this for you.

If not, you need to examine the output of VACUUM VERBOSE or pgstattuple (in
contrib) and set a policy based on the output.

It's entirely possible that some tables will need to be vacuumed frequently,
and others will not.

--
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.
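A minimal sketch of the two checks Brad mentions, assuming the pgstattuple contrib module is installed and using a hypothetical database and table name:

    # dead-tuple ratio for one table (requires contrib/pgstattuple)
    psql -d mydb -c "SELECT * FROM pgstattuple('public.some_table');"

    # page-level detail straight from vacuum itself
    psql -d mydb -c "VACUUM VERBOSE public.some_table;"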
On Tue, Jun 05, 2007 at 03:33:09AM -0700, Charles.Hou wrote:
> How can I know when it is time to run vacuumdb? I set up a crontab entry
> to run vacuumdb 3 times a day, because my database size increased from
> 440MB to 460MB in 8 hours.

You haven't told us enough. What's the churn on the database, to begin
with? That's only 20MB. If you replace approximately 20MB in 8 hours, then
it might be that the FSM is working perfectly, and this is the overhead
that is reasonable to keep around to allow your database to work well.

Also, have you looked into autovacuum? It might take some of the guesswork
out of this.

Finally, I expect that most of your churn is in particular tables. Those
are the ones you want to vacuum frequently. The rest of them aren't that
interesting. The way to learn about this is to use VACUUM VERBOSE, which
will give you information about how many pages it is recovering.

A

--
Andrew Sullivan | ajs@crankycanuck.ca
"The year's penultimate month" is not in truth a good way of saying
November.                                        --H.W. Fowler
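To see which tables carry the churn Andrew describes, the row-level statistics views can be queried directly; a sketch, assuming stats_row_level is enabled (it is in the postgresql.conf quoted later in this thread) and a hypothetical database name:

    # tables with the most update/delete activity since statistics were last reset
    psql -d mydb -c "SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
                     FROM pg_stat_user_tables
                     ORDER BY n_tup_upd + n_tup_del DESC LIMIT 10;"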
Charles.Hou wrote:
> How can I know when it is time to run vacuumdb? I set up a crontab entry
> to run vacuumdb 3 times a day, because my database size increased from
> 440MB to 460MB in 8 hours.

What version of PostgreSQL are you using? If it's recent (8.1 or later) you
might benefit from using autovacuum (AVD), which will take care of vacuuming
for you once table fragmentation (the ratio of dead tuples to total tuples)
becomes an issue.

Otherwise, it's hard to tell based on your metric above. You may have one
small table that is updated a lot (in which case a vacuum is a good idea on
that table) or lots of tables that are updated a little (in which case it's
probably overkill). Your best bet might be to turn on row-level stats
collection and look at the statistics to see if one or more tables are
becoming fragmented (BTW, this is what AVD does).

440 to 460 doesn't seem too significant to me, but it all depends on the
size of your tables, update/delete profile, etc.

--
Chander Ganesan
The Open Technology Group
One Copley Parkway, Suite 210
Morrisville, NC  27560
Phone: 877-258-8987/919-463-0999
http://www.otg-nc.com
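Whether the statistics collection Chander refers to is already turned on can be checked from psql; a small sketch with a hypothetical database name:

    psql -d mydb -c "SHOW stats_start_collector;"
    psql -d mydb -c "SHOW stats_row_level;"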
On Jun 5, 9:12, bnich...@ca.afilias.info (Brad Nicholson) wrote:
> Have you looked at autovacuum? It can handle this for you.
>
> If not, you need to examine the output of VACUUM VERBOSE or pgstattuple
> (in contrib) and set a policy based on the output.
>
> It's entirely possible that some tables will need to be vacuumed
> frequently, and others will not.

Where is the contrib directory? I can't find it.
On Jun 5, 9:25, a...@crankycanuck.ca (Andrew Sullivan) wrote:
> You haven't told us enough. What's the churn on the database, to begin
> with? That's only 20MB. If you replace approximately 20MB in 8 hours, then
> it might be that the FSM is working perfectly, and this is the overhead
> that is reasonable to keep around to allow your database to work well.
>
> Also, have you looked into autovacuum? It might take some of the guesswork
> out of this.
>
> Finally, I expect that most of your churn is in particular tables. Those
> are the ones you want to vacuum frequently. The rest of them aren't that
> interesting. The way to learn about this is to use VACUUM VERBOSE, which
> will give you information about how many pages it is recovering.

The version of PostgreSQL is 8.1.3 on i686-redhat-linux-gnu, compiled by
GCC 3.4.4 (Red Hat 3.4.4-2).

Clients connected: about 100 PCs, each with at least 1 connection; the
maximum is 4 connections per PC.

There are 194 tables in the database, and some of the tables are constantly
receiving UPDATE/INSERT commands. I have also enabled autovacuum. Today the
database size is 497MB: 497MB - 440MB = 57MB, so 57MB of growth in 18 hours.
What's wrong with my PostgreSQL server?

postgresql.conf:

#---------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#---------------------------------------------------------------------------

# - Connection Settings -

#listen_addresses = 'localhost'         # what IP address(es) to listen on;
listen_addresses = '*'                  # what IP address(es) to listen on;
                                        # comma-separated list of addresses;
                                        # defaults to 'localhost', '*' = all
#port = 5432
max_connections = 1000
# note: increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction).  You
# might also need to raise shared_buffers to support more connections.
superuser_reserved_connections = 2
#unix_socket_directory = ''
#unix_socket_group = ''
#unix_socket_permissions = 0777         # octal
#bonjour_name = ''                      # defaults to the computer name

# - Security & Authentication -

#authentication_timeout = 60            # 1-600, in seconds
#ssl = off
#password_encryption = on
#db_user_namespace = off

# Kerberos
#krb_server_keyfile = ''
#krb_srvname = 'postgres'
#krb_server_hostname = ''               # empty string matches any keytab entry
#krb_caseins_users = off

# - TCP Keepalives -
# see 'man 7 tcp' for details

#tcp_keepalives_idle = 0                # TCP_KEEPIDLE, in seconds; 0 selects the system default
#tcp_keepalives_interval = 0            # TCP_KEEPINTVL, in seconds; 0 selects the system default
#tcp_keepalives_count = 0               # TCP_KEEPCNT; 0 selects the system default

#---------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#---------------------------------------------------------------------------

# - Memory -

shared_buffers = 2000                   # min 16 or max_connections*2, 8KB each
temp_buffers = 1000                     # min 100, 8KB each
#max_prepared_transactions = 5          # can be 0 or more
# note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 1024                         # min 64, size in KB
maintenance_work_mem = 16384            # min 1024, size in KB
#max_stack_depth = 2048                 # min 100, size in KB

# - Free Space Map -

#max_fsm_pages = 20000                  # min max_fsm_relations*16, 6 bytes each
max_fsm_pages = 300000                  # min max_fsm_relations*16, 6 bytes each
max_fsm_relations = 1000                # min 100, ~70 bytes each
#max_fsm_relations = 2000               # min 100, ~70 bytes each

# - Kernel Resource Usage -

#max_files_per_process = 1000           # min 25
#preload_libraries = ''

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0                  # 0-1000 milliseconds
vacuum_cost_delay = 200                 # 0-1000 milliseconds
#vacuum_cost_page_hit = 1               # 0-10000 credits
vacuum_cost_page_hit = 6                # 0-10000 credits
#vacuum_cost_page_miss = 10             # 0-10000 credits
#vacuum_cost_page_dirty = 20            # 0-10000 credits
#vacuum_cost_limit = 200                # 0-10000 credits
vacuum_cost_limit = 100                 # 0-10000 credits

# - Background writer -

#bgwriter_delay = 200                   # 10-10000 milliseconds between rounds
#bgwriter_lru_percent = 1.0             # 0-100% of LRU buffers scanned/round
#bgwriter_lru_maxpages = 5              # 0-1000 buffers max written/round
#bgwriter_all_percent = 0.333           # 0-100% of all buffers scanned/round
#bgwriter_all_maxpages = 5              # 0-1000 buffers max written/round

#---------------------------------------------------------------------------
# WRITE AHEAD LOG
#---------------------------------------------------------------------------

# - Settings -

#fsync = on                             # turns forced synchronization on or off
#wal_sync_method = fsync                # the default is the first option supported by the
                                        # operating system: open_datasync, fdatasync, fsync,
                                        # fsync_writethrough, open_sync
#full_page_writes = on                  # recover from partial page writes
#wal_buffers = 8                        # min 4, 8KB each
#commit_delay = 0                       # range 0-100000, in microseconds
#commit_siblings = 5                    # range 1-1000

# - Checkpoints -

#checkpoint_segments = 3                # in logfile segments, min 1, 16MB each
#checkpoint_timeout = 300               # range 30-3600, in seconds
#checkpoint_warning = 30                # in seconds, 0 is off

# - Archiving -

archive_command = ''                    # command to use to archive a logfile segment
#Added, IChang, 11/28/2006 for restore
#archive_command = 'cp -f %p /home/csc/DB_HotBackup/archives/%f'

#---------------------------------------------------------------------------
# QUERY TUNING
#---------------------------------------------------------------------------

# - Planner Method Configuration -

enable_bitmapscan = on
enable_hashagg = on
enable_hashjoin = on
enable_indexscan = on
enable_mergejoin = on
enable_nestloop = on
enable_seqscan = off
enable_sort = on
enable_tidscan = on

# - Planner Cost Constants -

#effective_cache_size = 1000            # typically 8KB each
effective_cache_size = 1300             # 2/3 of RAM
random_page_cost = 4                    # units are one sequential page fetch cost
#cpu_tuple_cost = 0.01                  # (same)
#cpu_index_tuple_cost = 0.001           # (same)
#cpu_operator_cost = 0.0025             # (same)

# - Genetic Query Optimizer -

#geqo = on
#geqo_threshold = 12
#geqo_effort = 5                        # range 1-10
#geqo_pool_size = 0                     # selects default based on effort
#geqo_generations = 0                   # selects default based on effort
#geqo_selection_bias = 2.0              # range 1.5-2.0

# - Other Planner Options -

#default_statistics_target = 10         # range 1-1000
#constraint_exclusion = off
#from_collapse_limit = 8
#join_collapse_limit = 8                # 1 disables collapsing of explicit JOINs

#---------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#---------------------------------------------------------------------------

# - Where to Log -

#log_destination = 'syslog'             # Valid values are combinations of stderr, syslog
                                        # and eventlog, depending on platform.

# This is used when logging to stderr:
redirect_stderr = on                    # Enable capturing of stderr into log files
#redirect_stderr = off

# These are only used if redirect_stderr is on:
log_directory = 'pg_log'                # Directory where log files are written;
                                        # can be absolute or relative to PGDATA
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_filename = 'postgresql-%a.log'      # Log file name pattern; can include strftime() escapes
log_truncate_on_rotation = on           # If on, an existing log file of the same name as the
                                        # new log file will be truncated rather than appended
                                        # to, but only on time-driven rotation, not on restarts
                                        # or size-driven rotation.  Default is off.
log_rotation_age = 1440                 # Rotate logfiles after this many minutes; 0 to disable.
log_rotation_size = 0                   # Rotate logfiles after this many kilobytes; 0 to disable.

# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'

# - When to Log -

#client_min_messages = notice           # Values, in order of decreasing detail:
                                        # debug5..debug1, log, notice, warning, error
#log_min_messages = notice              # Values, in order of decreasing detail:
                                        # debug5..debug1, info, notice, warning, error,
                                        # log, fatal, panic
#log_error_verbosity = default          # terse, default, or verbose messages
#log_min_error_statement = panic        # Values, in order of increasing severity:
                                        # debug5..debug1, info, notice, warning, error, panic(off)
#log_min_duration_statement = -1        # -1 is disabled, 0 logs all statements
                                        # and their durations, in milliseconds.
#silent_mode = on                       # DO NOT USE without syslog or redirect_stderr

# - What to Log -

#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = off
log_connections = off
#log_connections = on
log_disconnections = off
#log_disconnections = on
#log_duration = off
log_duration = on
#log_line_prefix = ''
#log_line_prefix = 'user=%u,db=%d'
log_line_prefix = '%t %p '              # Special values: %u = user name, %d = database name,
                                        # %r = remote host and port, %h = remote host, %p = PID,
                                        # %t = timestamp (no milliseconds), %m = timestamp with
                                        # milliseconds, %i = command tag, %c = session id,
                                        # %l = session line number, %s = session start timestamp,
                                        # %x = transaction id, %q = stop here in non-session
                                        # processes, %% = '%'   e.g. '<%u%%%d> '
#log_statement = 'none'                 # none, mod, ddl, all
#log_statement = 'all'                  # none, mod, ddl, all
log_statement = 'mod'                   # none, mod, ddl, all
#log_hostname = off

#---------------------------------------------------------------------------
# RUNTIME STATISTICS
#---------------------------------------------------------------------------

# - Statistics Monitoring -

#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off

# - Query/Index Statistics Collector -

stats_start_collector = on
#stats_command_string = off
#stats_block_level = off
#stats_row_level = off
stats_row_level = on
#stats_reset_on_server_start = off

#---------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#---------------------------------------------------------------------------

#autovacuum = off                       # enable autovacuum subprocess?
autovacuum = on                         # enable autovacuum subprocess?
autovacuum_naptime = 20                 # time between autovacuum runs, in secs
#autovacuum_vacuum_threshold = 1000     # min # of tuple updates before vacuum
autovacuum_vacuum_threshold = 1         # min # of tuple updates before vacuum
#autovacuum_analyze_threshold = 500     # min # of tuple updates before analyze
autovacuum_analyze_threshold = 1        # min # of tuple updates before analyze
#autovacuum_vacuum_scale_factor = 0.4   # fraction of rel size before vacuum
autovacuum_vacuum_scale_factor = 0.1    # fraction of rel size before vacuum
autovacuum_analyze_scale_factor = 0.2   # fraction of rel size before analyze
#autovacuum_analyze_scale_factor = 1    # fraction of rel size before analyze
#autovacuum_vacuum_cost_delay = -1      # default vacuum cost delay for autovac,
                                        # -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for autovac,
                                        # -1 means use vacuum_cost_limit

#---------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#---------------------------------------------------------------------------

# - Statement Behavior -

#search_path = '$user,public'           # schema names
#default_tablespace = ''                # a tablespace name, '' uses the default
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#statement_timeout = 0                  # 0 is disabled, in milliseconds

# - Locale and Formatting -

#datestyle = 'iso, mdy'
#timezone = unknown                     # actually, defaults to TZ environment setting
#australian_timezones = off
#extra_float_digits = 0                 # min -15, max 2
#client_encoding = sql_ascii            # actually, defaults to database encoding
client_encoding = BIG5

# These settings are initialized by initdb -- they might be changed
#lc_messages = 'zh_TW.big5'             # locale for system error message strings
lc_messages = 'C'                       # locale for system error message strings
#lc_monetary = 'zh_TW.big5'             # locale for monetary formatting
lc_monetary = 'C'                       # locale for monetary formatting
#lc_numeric = 'zh_TW.big5'              # locale for number formatting
lc_numeric = 'C'                        # locale for number formatting
#lc_time = 'zh_TW.big5'                 # locale for time formatting
lc_time = 'C'                           # locale for time formatting

# - Other Defaults -

#explain_pretty_print = on
#dynamic_library_path = '$libdir'

#---------------------------------------------------------------------------
# LOCK MANAGEMENT
#---------------------------------------------------------------------------

#deadlock_timeout = 1000                # in milliseconds
#max_locks_per_transaction = 64         # min 10
# note: each lock table slot uses ~220 bytes of shared memory, and there are
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock table slots.

#---------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#---------------------------------------------------------------------------

# - Previous Postgres Versions -

#add_missing_from = off
#regex_flavor = advanced                # advanced, extended, or basic
#sql_inheritance = on
#default_with_oids = off
#escape_string_warning = off

# - Other Platforms & Clients -

#transform_null_equals = off

#---------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#---------------------------------------------------------------------------

#custom_variable_classes = ''           # list of custom variable class names
On Tue, 2007-06-05 at 16:59 -0700, Charles.Hou wrote:
> Where is the contrib directory? I can't find it.

If you built your own binaries from source, there is a directory called
contrib under the root of your source tree. You'll need to build the
modules.

If you installed a package, you'll probably need to grab a contrib package
for your version as well (I'm not overly familiar with the pre-packaged
binaries).

Look in the following directory for your binaries:

/path_to_postgres/doc/contrib

If the contrib stuff is installed, you'll find the appropriate docs there.

--
Brad Nicholson  416-673-4106
Database Administrator, Afilias Canada Corp.
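For a source build, installing a single contrib module such as pgstattuple typically looks like the sketch below; the source tree location, install prefix, and database name are illustrative assumptions rather than details from this thread.

    # build and install the pgstattuple contrib module from a source tree
    cd /usr/local/src/postgresql-8.1.3/contrib/pgstattuple
    make && make install
    # load its functions into the target database
    psql -d mydb -f /usr/local/pgsql/share/contrib/pgstattuple.sql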
On Tue, Jun 05, 2007 at 05:28:10PM -0700, Charles.Hou wrote:
> Clients connected: about 100 PCs, each with at least 1 connection; the
> maximum is 4 connections per PC.

So up to 400 connections? Are they all running transactions? For any
length of time?

> There are 194 tables in the database, and some of the tables are
> constantly receiving UPDATE/INSERT commands.

So _some_ tables are getting updated? (Inserts are not really relevant to
vacuuming here, but they'll surely be relevant to size.)

> I have also enabled autovacuum. Today the database size is 497MB:
> 497MB - 440MB = 57MB, so 57MB of growth in 18 hours. What's wrong with
> my PostgreSQL server?

Maybe nothing. How much data do you think you're putting in there? What
does VACUUM VERBOSE say, as I asked before -- are you getting a lot of dead
space?

A

--
Andrew Sullivan | ajs@crankycanuck.ca
The whole tendency of modern prose is away from concreteness.
                                                --George Orwell
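One way to look for the long-open transactions Andrew is asking about is pg_stat_activity; a sketch, assuming stats_command_string has been turned on (the configuration above leaves it at the default) and a hypothetical database name:

    # sessions sitting idle inside an open transaction
    psql -d mydb -c "SELECT procpid, usename, client_addr, query_start
                     FROM pg_stat_activity
                     WHERE current_query = '<IDLE> in transaction';"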
On Jun 7, 3:51, a...@crankycanuck.ca (Andrew Sullivan) wrote:
> So up to 400 connections? Are they all running transactions? For any
> length of time?
>
> So _some_ tables are getting updated? (Inserts are not really relevant to
> vacuuming here, but they'll surely be relevant to size.)
>
> Maybe nothing. How much data do you think you're putting in there? What
> does VACUUM VERBOSE say, as I asked before -- are you getting a lot of
> dead space?

I ran VACUUM VERBOSE on a table, but it hung for 10 minutes, so I had to
use the Linux "kill" command to exit the VACUUM. After I killed that PID,
PostgreSQL restarted automatically.

Here is the vacuum result for another table:

INFO:  vacuuming "public.trade_rpt"
INFO:  "ftrade_rpt": found 987 removable, 117279 nonremovable row versions in 16875 pages
DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 324 to 1280 bytes long.
There were 9 unused item pointers.
Total free space (including removable row versions) is 9059076 bytes.
138 pages are or will become empty, including 0 at the end of the table.
15838 pages containing 8772044 free bytes are potential move destinations.
CPU 0.01s/0.00u sec elapsed 341.61 sec.

341.61 sec???
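Rough arithmetic on that output puts the numbers in context; a sketch, assuming the default 8 KB page size:

    # heap size implied by the VACUUM VERBOSE output above
    echo "16875 * 8192" | bc                              # about 138 MB of heap pages
    # free space as a percentage of that heap
    echo "scale=1; 9059076 * 100 / (16875 * 8192)" | bc   # about 6.5% free

The long elapsed time next to almost no CPU time is also consistent with the cost-based vacuum delay settings shown in the configuration earlier in the thread (vacuum_cost_delay = 200, vacuum_cost_limit = 100), which deliberately throttle vacuum.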
On Thu, Jun 07, 2007 at 12:07:03AM -0700, Charles.Hou wrote:
> I ran VACUUM VERBOSE on a table, but it hung for 10 minutes, so I had to
> use the Linux "kill" command to exit the VACUUM. After I killed that PID,
> PostgreSQL restarted automatically.

Maybe it was doing work. Why did you kill it? If there is a problem on
that table, I wouldn't be surprised that it'd take 10 minutes. You just
caused all the work it started to do to be undone, and caused more bloat.
But...

> DETAIL:  0 dead row versions cannot be removed yet.

...this suggests you don't have bloat on that table at least. So it sounds
to me like your disk use is going up because you have a lot of data.

A

--
Andrew Sullivan | ajs@crankycanuck.ca
In the future this spectacle of the middle classes shocking the avant-
garde will probably become the textbook definition of Postmodernism.
                                                --Brad Holland
On Jun 8, 1:46 am, a...@crankycanuck.ca (Andrew Sullivan) wrote:
> Maybe it was doing work. Why did you kill it? If there is a problem on
> that table, I wouldn't be surprised that it'd take 10 minutes. You just
> caused all the work it started to do to be undone, and caused more bloat.
> But...
>
> > DETAIL:  0 dead row versions cannot be removed yet.
>
> ...this suggests you don't have bloat on that table at least. So it
> sounds to me like your disk use is going up because you have a lot of
> data.

I have traced the size of the table. About 1000 new rows are inserted into
the table per day, and each row is about 300 bytes: 1000 * 300 / 1024 =
293KB. But the size of this table has increased by 3MB.
3MB - 293KB = 2.7MB... Why? Where did the 2.7MB come from?
On Fri, Jun 08, 2007 at 06:29:55AM -0700, Charles.Hou wrote:
> I have traced the size of the table. About 1000 new rows are inserted
> into the table per day, and each row is about 300 bytes:
> 1000 * 300 / 1024 = 293KB. But the size of this table has increased by
> 3MB. 3MB - 293KB = 2.7MB... Why? Where did the 2.7MB come from?

How do you know what the size of the table is? You had at least two tables
you were working on before. I think there must be something you're not
communicating completely. (And why are you worried about less than 3MB
anyway? Regular vacuum will leave some empty space around for new data,
which means you don't have to go down to the filesystem to make the file
bigger before you write to it. This is a Good Thing.)

Please go back and run VACUUM VERBOSE on the table you killed the vacuum on
before.

A

--
Andrew Sullivan | ajs@crankycanuck.ca
However important originality may be in some fields, restraint and
adherence to procedure emerge as the more significant virtues in a
great many others.                              --Alain de Botton
On Jun 8, 10:20 pm, a...@crankycanuck.ca (Andrew Sullivan) wrote:
> How do you know what the size of the table is? You had at least two
> tables you were working on before. I think there must be something
> you're not communicating completely. (And why are you worried about less
> than 3MB anyway? Regular vacuum will leave some empty space around for
> new data, which means you don't have to go down to the filesystem to make
> the file bigger before you write to it. This is a Good Thing.)
>
> Please go back and run VACUUM VERBOSE on the table you killed the vacuum
> on before.

Because if the database grows by 36MB in one day, it will have grown by
about 1GB after a month; that's why I worry about the size. Another strange
thing: if I block all PostgreSQL clients and run vacuumdb, about 100MB of
free space is reclaimed.

> How do you know what the size of the table is?

I got the relname from pg_class and then looked for a file with that name
on disk.
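Table and database sizes can also be read from inside the server instead of hunting for files on disk; a sketch, assuming the built-in size functions added around 8.1 and a hypothetical database name (the table name is the one from the VACUUM VERBOSE output earlier in the thread):

    # heap size and total size (heap + indexes + TOAST) of one table
    psql -d mydb -c "SELECT pg_size_pretty(pg_relation_size('public.trade_rpt')),
                            pg_size_pretty(pg_total_relation_size('public.trade_rpt'));"
    # size of the whole database
    psql -d mydb -c "SELECT pg_size_pretty(pg_database_size(current_database()));"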
On Fri, Jun 08, 2007 at 08:20:36AM -0700, Charles.Hou wrote:
> Because if the database grows by 36MB in one day, it will have grown by
> about 1GB after a month; that's why I worry about the size. Another
> strange thing: if I block all PostgreSQL clients and run vacuumdb, about
> 100MB of free space is reclaimed.

Your conclusion of linear growth doesn't actually follow from your
premises. In any case, it sounds to me like (1) you're not vacuuming often
enough and (2) you have transactions open from the clients which are
preventing vacuuming from being effective. Autovacuum will solve (1) now
that you've enabled it, but you need to fix your existing bloat (which
probably means either VACUUM FULL, or CLUSTER and REINDEX). (2) is a bigger
problem, which you need to address at the client.

Also, I bet your free space map is too small. Increase it.

I have the feeling, on the basis of this thread, that you need to spend
some more time with the manual.

A

--
Andrew Sullivan | ajs@crankycanuck.ca
In the future this spectacle of the middle classes shocking the avant-
garde will probably become the textbook definition of Postmodernism.
                                                --Brad Holland
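A concrete follow-up to that advice might look like the sketch below; the database name is hypothetical and the table name comes from the VACUUM VERBOSE output earlier in the thread. VACUUM FULL and REINDEX take exclusive locks, so this is a maintenance-window operation.

    # one-off cleanup of an already-bloated table
    psql -d mydb -c "VACUUM FULL VERBOSE public.trade_rpt;"
    psql -d mydb -c "REINDEX TABLE public.trade_rpt;"

    # a database-wide VACUUM VERBOSE ends with a summary of how many FSM page
    # slots are needed; compare it against max_fsm_pages in postgresql.conf
    psql -d mydb -c "VACUUM VERBOSE;" 2>&1 | tail -n 6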