Discussion: Using PostgreSQL as application server
Dear Postgres users,
For the last few months I have been reading and researching whether PostgreSQL can be used as an application server. PostgreSQL supports many languages such as PL/Perl and PL/Python, supports dblink-like functions to connect to other PostgreSQL servers, and features for accessing external data are now in development. PostgreSQL works well on many operating systems and has a stable, good-quality codebase. Many users already use PL/Python or PL/Perl to work on many types of data and to implement logic that can be useful in web application management.
So I am wondering whether I can use PostgreSQL as a web application server.
A few points that support this opinion:
1. multiple language support for stored procs and triggers.
2. it can connect to other PostgreSQL servers or databases.
3. it is easier to manage stored procs or triggers than to manage a web application on separate servers.
4. data and logic/processing can be separated: one PostgreSQL instance can be used as the application server and another as the database.
5. stable, mature and open codebase.
I request users and developers to share their suggestions and opinions.
Waiting for your replies.
Thanks and regards,
Chaitanya Kulkarni
On Aug 13, 2011, at 11:57, c k <shreeseva.learning@gmail.com> wrote:
> Dear Postgres users,
> For the last few months I have been reading and researching whether PostgreSQL can be used as an application server. <snip>
> I request users and developers to share their suggestions and opinions.

Code yourself a nice "hello world" application that can be accessed by a web browser and outputs HTML using only PostgreSQL. If you can actually do that simple task you will then be in a better position to decide whether such an architecture is worth expanding upon. The better question to ask is: why wouldn't you want to use something like Tomcat or Apache plus a programming language?

David J.
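For reference, a minimal sketch of such a "hello world" is just a single SQL function that returns a complete HTML page as text, which something like the ngx_postgres connector mentioned later in this thread could expose over HTTP. The function name is made up for the example:

    -- Returns a complete HTML page as a single text value.
    CREATE OR REPLACE FUNCTION hello_world() RETURNS text AS $$
        SELECT '<html><head><title>Hello</title></head>'
            || '<body><p>Hello, world! Served by ' || version()
            || ' at ' || now() || '</p></body></html>';
    $$ LANGUAGE sql STABLE;

    -- Callable from psql, or mapped to an HTTP location by a connector module:
    SELECT hello_world();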
As sad@bestmx.ru replied above, I want to go this way at least to test the application. What I am going to do:
Use Python for a simple test web server.
Use XML to define static content for each page.
Use PostgreSQL to generate dynamic content using stored procs.
The static content of the page and the dynamic content together will make the final web page.
Use JavaScript for the client UI.
For my application the most important part is generating dynamic content. Scaling, concurrency, etc. are not issues at this time. We have to adopt government rules and change the business logic accordingly, and it must be fast enough to avoid penalties.
Regards,
Chaitanya Kulkarni
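As an illustration of the "static template plus dynamic content" split described in the plan above, a rough sketch in SQL might look like this (the pages table, the {{content}} placeholder convention and the function name are all invented for the example):

    -- Static page skeletons, each with a placeholder for the dynamic part.
    CREATE TABLE pages (
        page_name text PRIMARY KEY,
        template  text NOT NULL    -- contains the marker {{content}}
    );

    -- Merge a dynamically generated fragment into the static template.
    CREATE OR REPLACE FUNCTION render_page(text, text) RETURNS text AS $$
        SELECT replace(template, '{{content}}', $2)
        FROM pages
        WHERE page_name = $1;
    $$ LANGUAGE sql STABLE;

    -- Example usage:
    INSERT INTO pages VALUES ('home', '<html><body>{{content}}</body></html>');
    SELECT render_page('home', '<p>Generated at ' || now() || '</p>');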
c k wrote:
> ... can PostgreSQL be used as an application server? So I am wondering whether I can use PostgreSQL as a web application server. I request users and developers to share their suggestions and opinions. Waiting for your replies.

Yes! Yes and yes, my friend!
You should!
I am using PostgreSQL exactly as an application server
(stored procedures in PL/pgSQL).
At http://platzcart.com I managed to minimize server-side scripting:
stored procedures run the entire "business logic", and PHP wraps the results in HTML code.
In the next project (still under construction)
I managed to ELIMINATE server-side scripting altogether
(except the captcha server, which is standalone and written in Perl):
PL/pgSQL "business logic" again, plus client-side JS building the presentation.
The most important point in support of your idea is this:
*This is the originally intended usage of a DBMS.*
All the troubles, all the NoSQL whining, are caused by unnecessary middleware.
P.S.
I gave a talk at the NLUUG Spring Conference 2011 about the platzcart.com implementation;
it is available at http://platzcart.com/lib/platzcart.pdf.
It describes the advantages of using PostgreSQL as an application server,
and I am sure it explains good practice (maybe the best one).
P.P.S.
Now I am using nginx_http_postgres_module.
c k wrote:
> Yes, I know that I cannot create a simple web application using only PostgreSQL, because we need a web server to serve the HTML content.

You are wrong. You CAN! There is nginx_http_postgresql_module, exactly to connect the webserver directly to PostgreSQL and to OUTPUT query results to a browser.
Can you please give me its link? I found the ngx_postgres module.
Chaitanya Kulkarni
> Can you please give me its link?
> I found the ngx_postgres module.
>> there is nginx_http_postgresql_module
>> exactly to connect webserver directly to postgresql
>> and to OUTPUT query result to a browser.

http://wiki.nginx.org/3rdPartyModules
http://labs.frickle.com/nginx_ngx_postgres/
On Sun, 14 Aug 2011, c k wrote:
> I would like to generate content dynamically. I want a minimum of developers to be required, simple and powerful security and administration, and most importantly the ability to respond to changes.
> For my application the most important part is generating dynamic content.

Chaitanya,

There are three major components of a database application: the DBMS back end (Postgres in this case), middleware (in the language of your choice), and the UI. The UI will use Apache to display pages and accept user input. The middleware contains business logic and is the interface between the user and the back end.

Each (scripting) language has application development frameworks. My partner (a highly experienced software engineer) is using Ruby on Rails for one of our projects. It is powerful but has a long learning curve, and you need to do everything the Rails way. In Python (my scripting language of choice) there's Django, TurboGears, Pylons, and probably many more.

Depending on the approach you select you may be able to meet all your desired attributes as above. What you might consider doing is list those requirements in order of importance and focus on meeting the most important ones. This may mean dropping other preferences (e.g., a minimum number of developers, unless you have a long lead time for this project).

Rich
> Dear Postgres users,
> For the last few months I have been reading and researching whether PostgreSQL can be used as an application server. <snip>
> I request users and developers to share their suggestions and opinions.
>
> Thanks and regards,
> Chaitanya Kulkarni
No, PG has never acted, and will never act, as an application-server. Are you really sure you need a full-blown application-server?
If you're familiar with Scala (a language which runs on the JVM) I really recommend Lift (www.liftweb.net) with PostgreSQL. Definitely the best. Although Lift has a history of having a steep learning curve, I always favour maintainability, type-safety and robustness over RAD and time-to-market. Having said that, I honestly think a skilled Lift team is at least 2x as productive as with any other web framework for building modern UIs.
I'm about to post a Lift+JPA+Spring+PostgreSQL example soon which demonstrates how to use Lift with Spring+JPA+XA (two-phase commit, a.k.a. distributed transactions). Follow the mailing list and you'll see it there soon (I hope): http://groups.google.com/group/liftweb
--
Andreas Joseph Krogh <andreak@officenet.no>
Senior Software Developer / CTO
Public key: http://home.officenet.no/~andreak/public_key.asc
------------------------+---------------------------------------------+
OfficeNet AS | The most difficult thing in the world is to |
Rosenholmveien 25 | know how to do a thing and to watch |
1414 Trollåsen | somebody else doing it wrong, without |
NORWAY | comment. |
Org.nr: NO 981 479 076 | |
| |
Tlf: +47 24 15 38 90 | |
Fax: +47 24 15 38 91 | |
Mobile: +47 909 56 963 | |
------------------------+---------------------------------------------+
Hi CK:

First, it depends on what you mean by an application server. There are people who in fact do very similar things with PostgreSQL, essentially having it take on roles traditionally served by middleware.

On Sat, Aug 13, 2011 at 8:57 AM, c k <shreeseva.learning@gmail.com> wrote:
> ... can PostgreSQL be used as an application server?

Kinda.

> 2. it can connect to other PostgreSQL servers or databases.

Best used sparingly.

> 3. it is easier to manage stored procs or triggers than to manage a web application on separate servers.

I agree, but...

> 4. data and logic/processing can be separated: one PostgreSQL instance can be used as the application server and another as the database.

Bad idea. You don't really gain anything except complexity and headache by trying to separate like this.

> 5. stable, mature and open codebase.

Probably. Rather than give some general feedback, I will share how we do this with LedgerSMB. Some of our deployments using this approach are decent-sized.

1: Thus far all our stored procs are in SQL and PL/pgSQL. We do not currently use PL/Perl or PL/Python, or any other stored procedure language.

2: We pay close attention to semantics in order to ensure, to the extent possible, that catalog data allows for discovery of stored procedure interfaces. We then do a lot of query building in the "client" app (a web app) to discover these interfaces and call them properly. We do not include SQL code in most Perl modules. Instead virtually all calls go through a generic discovery interface.

3: All logic required to store, retrieve, or present (to the application) the data goes through these stored procedures.

4: The web app is a fairly thin Perl glue layer that binds HTML templates written in Template Toolkit to these stored procedures. It's rare to find Perl functions more than about 5-10 lines long, and when that happens most of the logic is usually taking data and putting it into a tabular structure for a report template. This could be done with a desktop app as well. The primary business logic and security is thus reusable between applications, making PostgreSQL essentially a middleware server.

Observations from my experience: design is critical and difficult. There aren't a lot of people doing this sort of thing and so there is a LOT of trial and error. Also, it is quite possible to do a heck of a lot in this area in SQL and PL/pgSQL. Focus on writing unified, maintainable queries and semantically meaningful interfaces (argument names, function names, etc.).

Best Wishes,
Chris Travers
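As a rough illustration of the "discover the interface from the catalogs" idea (not LedgerSMB's actual code, just a sketch), the argument names and types of an exposed function can be read straight out of pg_proc, so a thin client can build its calls dynamically:

    -- List the signature of a given public function.
    SELECT p.proname,
           p.proargnames,
           pg_get_function_arguments(p.oid) AS arguments,
           pg_get_function_result(p.oid)    AS result_type
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE n.nspname = 'public'
      AND p.proname = 'hello_world';   -- function name is just an example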
> Dear Postgres users,
<snip>

How about sending these to just one mailing list -- when you cross-post, everybody gets two copies of each response.
I see from other posts that you are using Python (a lovely language, but one that does not scale well for my own purposes, unfortunately), so this is most likely not relevant to your current situation.
However, for those people who would consider Java, consider using JBoss 7: it is very fast, it will support very sophisticated web front ends while allowing very powerful logic written in Java, and it can easily be configured to use PostgreSQL.
The combination of JBoss (http://www.jboss.org/overview.html) and JEE (Java Enterprise Edition) is well suited to large projects that can benefit from multi-core processors and massive amounts of RAM. My own humble development machine has a mere 4 cores and 8GB, but production machines can easily use a terabyte of memory and many more cores. Eclipse is an IDE that is well suited to developing applications in Java, and it can be configured to develop applications for JBoss.
Cheers,
Gavin
I have written a script to install JBoss 7 (a lot easier than for JBoss 4 and earlier!) and convert it to use PostgreSQL 9.1 beta3. This I am using to revise my JEE knowledge and get to grips with version 6 of JEE.
I will reply after some experiments.
Regards,
Chaitanya Kulkarni
On Aug 14, 2011, Uwe Schroeder wrote:
> > Can you please give me its link?
> > I found the ngx_postgres module.
> >> there is nginx_http_postgresql_module
> >> exactly to connect webserver directly to postgresql
> >> and to OUTPUT query result to a browser.
>
> http://wiki.nginx.org/3rdPartyModules
> http://labs.frickle.com/nginx_ngx_postgres/
>
> Personally I'd never ever hook my database directly to the internet - for me that's like writing your PIN on your banking card and leaving it next to the ATM :-)

A DATABASE IS THE BEST TOOL TO ARRANGE AN ACCESS CONTROL SCHEME.
On Sat, Aug 13, 2011 at 2:30 PM, Andreas Joseph Krogh <andreak@officenet.no> wrote:
> No, PG has never, and will never, act as an application-server.

Why in the world not? Now, it may or may not be a good idea, but there is no technical constraint that prevents PostgreSQL from being used in this fashion. I think it's a fine idea.

PostgreSQL has certain features, in particular being able to implement functions in any language, that make it uniquely well suited among its peers to act as an application server platform. Having a quasi-functional front end to your middleware procedural code is very powerful, as is having direct access to a local data store for caching and spooling purposes. So powerful that I would argue that, if properly supported by tools, it would be superior to many of the more classic stacks in use today. I could go down the list of reasons why that's the case -- tight coupling with data, performance, emphasis on transactional coding, etc. There are downsides too, but those could be mitigated with some thought and work.

Postgres is not just a database -- it's a language hosting platform if you want to use it as such. Now, you can continue to do things as you've always done (database 'here', code 'here', web server 'here'), but why discourage people from trying out different things?

merlin
On Monday, 15 August 2011 at 16:36:23, you wrote:
> On Sat, Aug 13, 2011 at 2:30 PM, Andreas Joseph Krogh <andreak@officenet.no> wrote:
> > No, PG has never, and will never, act as an application-server.
>
> Why in the world not? [...] Postgres is not just a database -- it's a language hosting platform if you want to use it as such. Now, you can continue to do things as you've always done (database 'here', code 'here', web server 'here'), but why discourage people from trying out different things?
>
> merlin

Sorry if I stepped on any toes here. But seriously - by my definition of app-server, PG is not suited at all. I would strongly discourage anyone from using any RDBMS as an app-server. *IMO* it makes development, testing, separation of concerns, debugging and re-deployment a nightmare.

PG has a lot of nice DB-features and I use it every day, but as an app-server - no way.

--
Andreas Joseph Krogh <andreak@officenet.no>
Senior Software Developer / CTO
On Sat, Aug 13, 2011 at 9:57 AM, c k <shreeseva.learning@gmail.com> wrote:
> ... can PostgreSQL be used as an application server? <snip>

Besides the previously mentioned nginx module there's Apache's mod_libpq: http://asmith.id.au/mod_libpq.html

But I'd stick to a language to wrap stuff in, like PHP etc.
On Mon, Aug 15, 2011 at 7:50 AM, Andreas Joseph Krogh <andreak@officenet.no> wrote:
> Sorry if I stepped on any toes here. But seriously - by my definition of app-server, PG is not suited at all. I would strongly discourage anyone from using any RDBMS as an app-server. *IMO* it makes development, testing, separation of concerns, debugging and re-deployment a nightmare.
>
> PG has a lot of nice DB-features and I use it every day, but as an app-server - no way.

The key issue here IMO is what you really want in an app server, which is why my answer is "kinda." PostgreSQL can do, and do well, a strong subset of what an app server does. It is not a reasonable replacement for a standard middleware solution with all the bells and whistles, however.

That being said, you could build an app server around PostgreSQL, where Pg is the entry point of the app server, without a whole lot of additional problems. Want to send an email? Write the data to a queue table and NOTIFY. Let another process pick it up and handle it. The approach you have to take is different from a traditional app server approach, largely because of the separation-of-concerns issues you flag. In general, as with any multi-tiered application (and several tiers can exist within PostgreSQL quite well), separation of concerns is a critical aspect of design.

I'd suggest that PostgreSQL can do at least half of what folks usually expect from middleware by itself, and do it well, without other processes listening for notifications. For example, it can accept business logic calls, take the data presented, manipulate (and store/retrieve) it, and send the updated data back to the client, all while enforcing security and data integrity constraints. Practical experience suggests that this is a solid majority of what a middleware program is actually doing. And it can do things better because you don't really have to deal with all the ORM stuff that usually ends up in that environment.

If you want to do stuff beyond that, it is really best either to do it before hitting the database (through the client or some lightweight middleware wrapper), or after (via helper programs and LISTEN/NOTIFY).

I think using PostgreSQL as a business logic entry point is not necessarily a bad idea, provided the application is well designed. The idea of using it to connect to other databases, however, is generally best avoided (though there are some cases where it is impossible to do otherwise).

Best Wishes,
Chris Travers
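A minimal sketch of the "write to a queue table and NOTIFY" pattern Chris mentions; the table, channel and column names are invented for the example:

    -- Work queue written to inside the same transaction as the business logic.
    CREATE TABLE email_queue (
        id        serial PRIMARY KEY,
        recipient text NOT NULL,
        subject   text NOT NULL,
        body      text NOT NULL
    );

    -- Inside the business-logic transaction:
    INSERT INTO email_queue (recipient, subject, body)
        VALUES ('orders@example.com', 'Stock low', 'Please reorder widgets');
    NOTIFY email_queue;   -- dropped automatically if the transaction rolls back

    -- The helper process (outside PostgreSQL) runs LISTEN email_queue;
    -- and drains the queue when woken up (single worker assumed here;
    -- concurrent workers would need row locking):
    DELETE FROM email_queue
     WHERE id = (SELECT min(id) FROM email_queue)
    RETURNING recipient, subject, body;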
On Monday 15. August 2011 16.36.23 Merlin Moncure wrote:
> Postgres is not just a database -- it's a language hosting platform if you want to use it as such. Now, you can continue to do things as you've always done (database 'here', code 'here', web server 'here'), but why discourage people from trying out different things?

Somebody's probably going to do it -- for no other reason than because you can. Sometimes I'll write functions like

    CREATE OR REPLACE FUNCTION dpp(INTEGER) RETURNS SETOF TEXT AS $$
        SELECT '<p class="packed">' || ss_link_expand(source_text) || '</p>'
        FROM sources
        WHERE parent_id = $1
        ORDER BY sort_order
    $$ LANGUAGE SQL STABLE;

for dumping text that I'll copy and paste right into a static Web page. It's a lot easier to do this in psql than a lot of other methods that I can think of.

BTW, the mentioned ss_link_expand() function will generate hyperlinks on the fly from a compact format stored in the database. The concept is explained here: <http://solumslekt.org/blog/?p=151>

I'm working with Postgres and PHP in tandem, and frequently write functions in SQL or PL/pgSQL that will output text directly in HTML format, mostly because I've found text transformation (particularly regexp_replace) in Postgres to be far superior to the equivalent methods of doing it in PHP.

Leif
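As a toy version of that link-expansion idea (not Leif's actual ss_link_expand(), just a sketch of the regexp_replace technique, assuming a made-up compact [[target|label]] format):

    -- Expand [[target|label]] into an HTML hyperlink.
    CREATE OR REPLACE FUNCTION toy_link_expand(text) RETURNS text AS $$
        SELECT regexp_replace($1,
                              '\[\[([^]|]+)\|([^]]+)\]\]',
                              '<a href="\1">\2</a>',
                              'g');
    $$ LANGUAGE sql IMMUTABLE;

    -- SELECT toy_link_expand('See [[/person/42|Ola Nordmann]] for details.');
    -- => See <a href="/person/42">Ola Nordmann</a> for details.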
Further to Scott's comment, we are running our application platform on nginx/PHP (using php-fpm). It scales very well and it is extremely fast.

When running under Apache, we had to constantly restart the apache service because it could not handle the load (at 150 concurrent users hitting the app it would pretty much grind to a standstill). We moved to Nginx, have quadrupled the number of devices which are accessing our platform, and the number of PHP processes which are actually running is < 10. I had allocated a dynamic pool whereby we have 35 processes ready to accept requests at all times, which can dynamically scale up, but it does not come close to needing all of the existing processes. Everything is running under FreeBSD 8.1 amd64.

As Scott mentioned, wrapping it in a real language is much better - the extent of what you will be able to do within the module will be severely limited vs. a real language.

> From: Scott Marlowe
> Besides the previously mentioned nginx module there's Apache's mod_libpq: http://asmith.id.au/mod_libpq.html
> But I'd stick to a language to wrap stuff in, like PHP etc.
Scott Marlowe wrote:
> Besides the previously mentioned nginx module there's Apache's mod_libpq: http://asmith.id.au/mod_libpq.html
> But I'd stick to a language to wrap stuff in, like PHP etc.

BTW, string concatenation in PostgreSQL (PL/pgSQL) is FASTER than in PHP.
On Mon, Aug 15, 2011 at 9:50 AM, Andreas Joseph Krogh <andreak@officenet.no> wrote:
> Sorry if I stepped on any toes here. But seriously - by my definition of app-server, PG is not suited at all. I would strongly discourage anyone from using any RDBMS as an app-server. *IMO* it makes development, testing, separation of concerns, debugging and re-deployment a nightmare.
>
> PG has a lot of nice DB-features and I use it every day, but as an app-server - no way.

Nobody's toes are getting stepped on. Constructive debate gets people talking and puts the issues on the table. Your complaints are pretty non-specific (development is a 'nightmare'... why?). This topic reminds me a lot of the 'is putting logic in functions a good idea?' debate, which tends to bring out strong opinions that are light on substance. As I've said many times, this debate gets really interesting in the specific case of PostgreSQL because the database is so extensible, with good abstractions in the right places.

An application server, from my point of view, is just a server that sits between the UI (typically a browser these days) and the database. This server manages various temporary structures like application state and data spools coming from the 'persistence layer' (the database). Due to heavy emphasis on the importance of putting most code into this layer from companies like IBM and Microsoft, we've seen an explosion of tools and libraries that can be hooked in here so that, if you choose to do so, you can drive the entire application stack such that the database and the browser are just appendages of this system. This path is rich, well traveled, and boring. But are things really being done in the right way? I obviously don't think so... in fact I think the typical corporate IT programming dept is a total disaster. Many programmers, especially those who really get SQL and the relational model, tend to resist programming in this fashion because they feel that it's over-complex, verbose (especially in java-land), and brittle.

SQL provides a lot of strong guarantees that are relatively easy for even a junior programmer to take advantage of, like serializability of actions and automatic rollback to a known state due to an unhandled error. Programming around the lack of these features could charitably be described as creating a strong justification for the excellent debugging tools you are mentioning.

merlin
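A small, invented example of the "automatic rollback to a known state" point: in PL/pgSQL an EXCEPTION block undoes everything done since the start of that block, so the caller never sees partial work:

    CREATE TABLE accounts (
        id      int PRIMARY KEY,
        balance numeric NOT NULL CHECK (balance >= 0)
    );

    CREATE OR REPLACE FUNCTION transfer(from_id int, to_id int, amount numeric)
    RETURNS boolean AS $$
    BEGIN
        UPDATE accounts SET balance = balance + amount WHERE id = to_id;
        UPDATE accounts SET balance = balance - amount WHERE id = from_id;  -- may violate the CHECK
        RETURN true;
    EXCEPTION WHEN check_violation THEN
        -- the credit above is rolled back together with the failed debit
        RETURN false;
    END;
    $$ LANGUAGE plpgsql;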
c k wrote:
> Dear Postgres users,
> For the last few months I have been reading and researching whether PostgreSQL can be used as an application server. <snip>
> I request users and developers to share their suggestions and opinions.

I believe that it is ideal for Postgres to be computationally complete, in that one *could* use it to implement a complete application. That isn't to say one should do this as a matter of course; it is good to use appropriate tools for a job; but it should at least be possible if one wanted to. -- Darren Duncan
On Mon, Aug 15, 2011 at 1:44 PM, Darren Duncan <darren@darrenduncan.net> wrote:
> I believe that it is ideal for Postgres to be computationally complete, in that one *could* use it to implement a complete application. That isn't to say one should do this as a matter of course; it is good to use appropriate tools for a job; but it should at least be possible if one wanted to.

I think the limit is actually transactional control.

So, suppose I manage inventory. I want an email to go out to the ordering manager when the quantity I have of an item drops below the re-order point. I also want this email NOT to go out if the transaction rolls back. (Wait, the order of 50000 widgets I just processed rolled back because it isn't to a valid customer! We normally only sell 50000 per year anyway. No need for the email.)

1) I don't see how this is possible directly from within PostgreSQL.
2) Given the obvious ways around this, I don't see why this is desirable to add to PostgreSQL.

Best Wishes,
Chris Travers
On Mon, Aug 15, 2011 at 3:50 PM, Chris Travers <chris.travers@gmail.com> wrote:
> I think the limit is actually transactional control.

100% agree. Until we get stored procedures with manual transaction control you have to involve an external agent. For a web-serving backend, though, this isn't a big deal, because you have to involve the browser anyway and your request lifetime is generally short enough to keep in a database transaction.

> I want an email to go out to the ordering manager when the quantity I have of an item drops below the re-order point. I also want this email NOT to go out if the transaction rolls back.
> 1) I don't see how this is possible directly from within PostgreSQL.
> 2) Given the obvious ways around this, I don't see why this is desirable to add to PostgreSQL.

With a stored procedure, you could just listen for notifications (which are not delivered if the transaction rolls back) indefinitely and/or monitor a queue table. In lieu of that, just cron it up.

merlin
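To make the inventory example concrete, a hedged sketch of the trigger-plus-NOTIFY approach (all names invented); the notification is queued with the transaction and silently dropped if it rolls back:

    CREATE TABLE inventory (
        item_id       int PRIMARY KEY,
        qty_on_hand   int NOT NULL,
        reorder_point int NOT NULL
    );

    CREATE OR REPLACE FUNCTION notify_reorder() RETURNS trigger AS $$
    BEGIN
        IF NEW.qty_on_hand < NEW.reorder_point
           AND OLD.qty_on_hand >= OLD.reorder_point THEN
            PERFORM pg_notify('reorder', NEW.item_id::text);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER inventory_reorder
        AFTER UPDATE OF qty_on_hand ON inventory
        FOR EACH ROW EXECUTE PROCEDURE notify_reorder();

    -- The mail-sending helper just runs LISTEN reorder; and emails the
    -- ordering manager whenever a notification arrives.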
Chris Travers wrote:
> I think the limit is actually transactional control.
> So, suppose I manage inventory. I want an email to go out to the ordering manager when the quantity I have of an item drops below the re-order point. I also want this email NOT to go out if the transaction rolls back.
> 1) I don't see how this is possible directly from within PostgreSQL.
> 2) Given the obvious ways around this, I don't see why this is desirable to add to PostgreSQL.

I have two answers to this:

1. Try using multiple processes. You can have a separate process, in a distinct transactional context, for sending the emails, and it only does so under certain conditions, such as if it sees that a committed change has put the processed quantity over 50K. The process performing the order that might get rolled back wouldn't send the email itself, though it might explicitly notify the other process, if the other can't tell by itself that something happened. It's not like everything has to be a single process.

2. The computational completeness I'm speaking of is what I see as an ideal here. Some of that computational completeness may currently be missing but could be added later. Or it may already exist.

-- Darren Duncan
On Mon, Aug 15, 2011 at 2:05 PM, Darren Duncan <darren@darrenduncan.net> wrote:
> 1. Try using multiple processes. You can have a separate process, in a distinct transactional context, for sending the emails [...]

This is the "obvious workaround", and not directly from within PostgreSQL. The helper process cannot be a Pg function because it can't commit its own deletion from the queue.

> 2. The computational completeness I'm speaking of is what I see as an ideal here. Some of that computational completeness may currently be missing but could be added later. Or it may already exist.

But given the ability to do all this through helper processes and LISTEN/NOTIFY, is that even really needed? If so, why? Just because PostgreSQL may not be computationally complete in this regard, that doesn't mean that it can't be part of a computationally complete system, right?

Best Wishes,
Chris Travers
>> I believe that it is ideal for Postgres to be computationally complete, in that one *could* use it to implement a complete application. That isn't to say one should do this as a matter of course; it is good to use appropriate tools for a job; but it should at least be possible if one wanted to. -- Darren Duncan

So who wants to fund the effort to create the necessary infrastructure to display a programmer-defined user interface screen (think of the "Forms" module in Microsoft Access)? Or are you expecting the end-user to open up pgAdmin and type "SELECT hello_world();"? I would argue that because PostgreSQL is able to talk with many languages that can create these "Forms" (or even - through extensions - a web browser), such functionality is NOT DESIRABLE, and thus PostgreSQL would not ideally be "computationally complete" by that definition.

David J.
On Mon, Aug 15, 2011 at 4:54 PM, David Johnston <polobo@yahoo.com> wrote:
> So who wants to fund the effort to create the necessary infrastructure to display a programmer-defined user interface screen (think of the "Forms" module in Microsoft Access)? [...]

I've been thinking for a while about doing this: describe the interface in tables along with some rendering code which directly accepts the request and spits out some HTML. I'd be gunning for something with the speed and ease of development of Delphi, which FWICT has never been reproduced. So you'd have a 'form' table which you could then bind to a real table via an adapter or some automatic scaffolding which renders the entry based on the structure of the table. Sitting in front of the database would probably be an ultra-thin web server like node.js which just grabs the request and hands it to the database through a simple connection pool.

merlin
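A very rough sketch of the "describe the interface in tables" idea (all names invented): a form-metadata table plus a function that renders it to HTML:

    CREATE TABLE form_fields (
        form_name  text NOT NULL,
        field_name text NOT NULL,
        label      text NOT NULL,
        input_type text NOT NULL DEFAULT 'text',
        sort_order int  NOT NULL,
        PRIMARY KEY (form_name, field_name)
    );

    CREATE OR REPLACE FUNCTION render_form(text) RETURNS text AS $$
        SELECT '<form method="post" action="/' || $1 || '">'
            || string_agg('<label>' || label || ' <input type="' || input_type
                          || '" name="' || field_name || '"></label>',
                          '' ORDER BY sort_order)
            || '<input type="submit"></form>'
        FROM form_fields
        WHERE form_name = $1;
    $$ LANGUAGE sql STABLE;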
On Mon, Aug 15, 2011 at 11:33 AM, sad@bestmx.ru <sad@bestmx.ru> wrote:
> BTW, string concatenation in PostgreSQL (PL/pgSQL) is FASTER than in PHP.

But I can throw 1,000 cores at a large load with PHP. Much harder to do with PL/pgSQL.
On Monday, August 15, 2011 6:10 PM, Merlin Moncure wrote:
> I've been thinking for a while about doing this: describe the interface in tables along with some rendering code which directly accepts the request and spits out some HTML. [...] Sitting in front of the database would probably be an ultra-thin web server like node.js which just grabs the request and hands it to the database through a simple connection pool.

A more powerful (and complex) solution would be to integrate some form of "templating" solution (e.g., Velocity) and require that the interface definitions be templates that generate valid HTML upon processing (since your solution already requires an external web browser anyway). Hstore and cursors (with a named composite type) could be used to feed data into the template.

David J.
David Johnston wrote:
> So who wants to fund the effort to create the necessary infrastructure to display a programmer-defined user interface screen (think of the "Forms" module in Microsoft Access)? [...] I would argue that because PostgreSQL is able to talk with many languages that can create these "Forms" (or even - through extensions - a web browser), such functionality is NOT DESIRABLE, and thus PostgreSQL would not ideally be "computationally complete" by that definition.

I mean computationally complete in that Postgres is an application-level virtual machine within which it is technically possible to write an emulator for any given computationally complete language using just stock Postgres and stock PLs, such that, say, you can just feed a self-contained script to psql and that script is an application capable of doing anything. I'm not saying that it has to perform well, just that it should be possible.

I certainly don't expect any interface-related higher-level libraries from this effort, but the foundation should be there so users can create their own just by writing an installable Postgres extension consisting of PL procedures etc. that doesn't need a C compiler.

I believe we basically have all the foundation already, with maybe procedures executable outside transactions being the last major part.

-- Darren Duncan
On Mon, Aug 15, 2011 at 3:47 PM, Darren Duncan <darren@darrenduncan.net> wrote:
> I believe we basically have all the foundation already, with maybe procedures executable outside transactions being the last major part.

Why is this desirable? Why is it more desirable than actually using the listen/notify infrastructure that exists already?

Best Wishes,
Chris Travers
On 15/08/2011 10:36 PM, Merlin Moncure wrote:
> On Sat, Aug 13, 2011 at 2:30 PM, Andreas Joseph Krogh <andreak@officenet.no> wrote:
>> No, PG has never, and will never, act as an application-server.
> Why in the world not?

The biggest reason is safety. Beyond that, the lack of autonomous transactions, stored procedures, in-DB timers, and support for any protocol other than the native Pg database query protocol means it's also rather impractical.

I guess theoretically one could embed a JVM / Python instance / whatever in the postmaster and have it spawn new backends for incoming connections with other protocols. But... why? Why add all that complexity and - more importantly - contaminate PostgreSQL's address space with more code that can fail when you don't have to? PLs and user C procedures are already risk enough as far as I'm concerned.

I *like* the DB being standalone. It's safer from bad code scribbling garbage across the heap and corrupting shared memory. What do you gain by trying to make Pg play appserver too?

Now, I have nothing against keeping business logic in the DB. This makes a lot of sense to me, though it'd be better if it were easier to version DB changes. What I don't get is having the client access front-end in the DB too... that just seems like asking for trouble and security problems.

> Postgresql has certain features, in particular being able to implement functions in any language, that make it uniquely well suited among its peers to act as an application server platform.

Actually, I'd say Oracle's really solid Java support, in-DB timers, and autonomous transactions make it rather better suited.

-- Craig Ringer
On Monday, August 15, 2011 6:48 PM, Darren Duncan wrote:
> I mean computationally complete in that Postgres is an application-level virtual machine within which it is technically possible to write an emulator for any given computationally complete language using just stock Postgres and stock PLs [...]
> I believe we basically have all the foundation already, with maybe procedures executable outside transactions being the last major part.

This whole line is getting somewhat off-topic; we're not talking about a "computationally complete" application but simply one that can handle HTTP requests and dispatch calls to user-defined methods. This seems like a small-enough requirement and seems to already have a solution (though I haven't looked at the provided links); whether you call it PostgreSQL or not is a matter of semantics (as is much of this thread).

That said... please restate your request in terms of benefits, as opposed to a checklist of cool features that barely work but, because they are present, can be added to the marketing materials.

I rather cater to a user that is capable and willing to choose multiple, separate, best-of-breed applications, as opposed to those that want "Microsoft Access" on steroids. Given limited resources, even software obeys the axiom "Jack of All Trades; Master of None".

If PostgreSQL were actually getting money for each user, then maybe a focus on adding additional "user experience" features would be warranted; as it is, the effort should go into making the free core as good as possible and into encouraging and assisting a community to build tools (core augments and developer tools). This is opposed to the "do everything ourselves" mentality that you seem to prescribe. While this community exists now, my personal impression is that more effort could be taken to make it more prominent and accessible to users.

Think of it as giving new users the course outline and "required reading" documents and then saying that they can spend the time to teach themselves the fundamentals, or directing them to capable teachers who can help guide them. Likely these teachers would be willing to provide the outline and other materials for the opportunity to gain new students. PostgreSQL could then serve as an unbiased moderator, attempting to make sure that there is a variety of philosophies represented as well as reviewing the accuracy/tone of the materials that they certify.

David J.
On Mon, Aug 15, 2011 at 7:06 PM, Craig Ringer <ringerc@ringerc.id.au> wrote:
> The biggest reason is safety. Beyond that, the lack of autonomous transactions, stored procedures, in-DB timers, and support for any protocol other than the native Pg database query protocol means it's also rather impractical.

Well, autonomous transactions can be kludged. Beyond that though, I agree on the sproc point but not the protocol. It's pretty trivial to front-end the database with an HTTP-understanding server which converts that to the libpq representation (node.js gets the nod from that point of view).

> I guess theoretically one could embed a JVM / Python instance / whatever in the postmaster and have it spawn new backends for incoming connections with other protocols. But... why? Why add all that complexity and - more importantly - contaminate PostgreSQL's address space with more code that can fail when you don't have to? PLs and user C procedures are already risk enough as far as I'm concerned.

I don't see what's so difficult about putting a thin protocol server in front of the database. In terms of security, good coding practices in languages like plpgsql are well understood and there is no reason to believe they are any more or less secure than the middleware, as long as good coding practices are followed.

> I *like* the DB being standalone. It's safer from bad code scribbling garbage across the heap and corrupting shared memory. What do you gain by trying to make Pg play appserver too?

Simple. You eliminate the layer that traditionally sucks down most of the coding effort and presents most of the bugs. SQL errors and (especially) transaction state are first class, so you can just let the SQL engine clean everything up for you when you have an error. If your application is small enough that you don't have to break application serving onto a separate machine, you can eliminate moving data through the protocol and all the application-managed caches that plague traditional stacks. You get to code in a more declarative, functional environment that has (in my humble experience) absolutely proven itself in terms of defect rates. You get to reduce your LOC by 50-80%.

There are downsides too -- you lose access to the excellent middleware tools out there, and you are 'stuck' on Postgres and need to come up with hard-to-find and expensive Postgres talent. You need to be prepared to blaze a path, etc. etc.

merlin
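The "autonomous transactions can be kludged" remark usually refers to using dblink to open a second connection back to the same database, so that work committed there survives a rollback of the calling transaction. A sketch (the connection string and table are examples, and the dblink module must be installed):

    -- The INSERT commits on its own connection, so the audit row is kept
    -- even if the calling transaction later rolls back.
    SELECT dblink_exec(
        'dbname=mydb',
        'INSERT INTO audit_log (event, logged_at) VALUES (''order failed'', now())'
    );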
On Mon, Aug 15, 2011 at 5:14 PM, David Johnston <polobo@yahoo.com> wrote:
>> This whole line is getting somewhat off-topic; we're not talking about a "computationally complete" application but simply one that can handle HTTP requests and dispatch calls to user-defined methods. This seems like a small-enough requirement and seems to already have a solution (though I haven't looked at the provided links); whether you call it PostgreSQL or not is a matter of semantics (as is much of this thread). <<

Which PostgreSQL *can* do, but probably shouldn't.

>> Please restate your request in terms of benefits as opposed to a checklist of cool features that barely work but, because they are present, can be added to the marketing materials. <<

The whole thing is: Pg as an app server occupies a *different* role than a traditional app server. Not better or worse, just different. Think of it as a database program which can also operate as a message queue once operations complete successfully and are committed. That's a pretty powerful thing with listen/notify. And you can process data in additional PLs as well, accessing methods from other languages so you don't have to reinvent the wheel.

It is my view that PostgreSQL makes an *excellent* application server as long as you use it intelligently. That means treating the database as the *central* application and then using it to manage not only main data storage, but also message queues to other helper processes which can, for example, send email, print documents, or whatever. This is the direction LedgerSMB is moving: using Pg to the max but doing so carefully and intelligently. It is not a traditional application server, but it can easily replace a traditional application server along with helper programs in whatever languages one wants to write them in. The power and flexibility are really quite amazing.

Best Wishes,
Chris Travers
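As a rough illustration of the "database as central application plus message queue" pattern Chris describes, the sketch below creates a hypothetical email_queue table with a trigger that raises a NOTIFY for each committed insert; none of the names come from LedgerSMB, and the DDL is applied through psycopg2 only to keep all of the examples in one language:

```python
# Install a hypothetical queue table plus a NOTIFY trigger.
import psycopg2

DDL = """
CREATE TABLE email_queue (
    id        serial PRIMARY KEY,
    recipient text NOT NULL,
    subject   text NOT NULL,
    body      text NOT NULL,
    sent_at   timestamptz          -- NULL until a helper process delivers it
);

CREATE OR REPLACE FUNCTION email_queue_notify() RETURNS trigger AS $$
BEGIN
    -- Listeners are only woken when the enqueuing transaction commits,
    -- so a rolled-back business transaction never produces an email.
    PERFORM pg_notify('email_queue', NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER email_queue_notify
    AFTER INSERT ON email_queue
    FOR EACH ROW EXECUTE PROCEDURE email_queue_notify();
"""

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=appdb")   # assumed DSN
    conn.cursor().execute(DDL)
    conn.commit()
    conn.close()
```

Helper processes that LISTEN on the channel (or a cron job that polls the table) then do the actual sending or printing, which keeps the slow, failure-prone work outside the database transaction.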
Chris Travers wrote: > On Mon, Aug 15, 2011 at 3:47 PM, Darren Duncan <darren@darrenduncan.net> wrote: >> I believe we basically have all the foundation already, with maybe >> procedures executable outside transactions being the last major part. >> > Why is this desirable? Why is it more desirable than actually using > the listen/notify infrastructure that exists already? Maybe listen/notify is sufficient by itself. I withdraw my "procedures executable outside transactions" comment for now, and just bring it up later if I can think of a specific use case that other mechanisms don't satisfy. -- Darren Duncan
On Mon, Aug 15, 2011 at 7:23 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> There are downsides too -- you lose access to the excellent middleware tools out there, and you are 'stuck' on postgres and need to come up with hard to find and expensive postgres talent. You need to be prepared to blaze a path, etc etc.

Yep. Also, it's REAL easy to stick a caching layer like memcached into the middle-tier app layer, but nearly impossible to do so in pgsql. For large systems, this would make pg as an app server a no-go. But for small to medium sized systems that don't need caching it could work out.
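For contrast, the cache-aside code Scott has in mind is only a few lines in a conventional middle tier. This sketch uses the python-memcached client; the key scheme, DSN, table, and 60-second TTL are all invented for the example:

```python
# Hypothetical middle-tier lookup: try memcached first, fall back to PostgreSQL.
import json

import memcache      # python-memcached client
import psycopg2

mc = memcache.Client(["127.0.0.1:11211"])
DSN = "dbname=appdb"


def get_product(product_id):
    key = "product:%d" % product_id
    cached = mc.get(key)
    if cached is not None:
        return json.loads(cached)

    conn = psycopg2.connect(DSN)
    try:
        cur = conn.cursor()
        cur.execute("SELECT name, price FROM products WHERE id = %s", (product_id,))
        row = cur.fetchone()
    finally:
        conn.close()

    result = {"name": row[0], "price": str(row[1])} if row else None
    mc.set(key, json.dumps(result), time=60)   # cache for 60 seconds
    return result
```

Nothing equivalent exists inside a stock PostgreSQL backend, which is the scaling objection being raised here.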
On Mon, Aug 15, 2011 at 10:05 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote: > > Yep. Also, it's REAL easy to stick a caching layer like memcached > into the middle tier app layer, but nearly impossible to do so in > pgsql. For large systems, this would make pg as an app server a nogo. > But for small to medium sized systems that don't need caching it > could work out. > The other big scalability limitation for Pg as an app server is you really can't do connection pooling. Best Wishes, Chris Travers
Scott Marlowe wrote:
> On Mon, Aug 15, 2011 at 11:33 AM, sad@bestmx.ru <sad@bestmx.ru> wrote:
>> Scott Marlowe wrote:
>>> On Sat, Aug 13, 2011 at 9:57 AM, c k <shreeseva.learning@gmail.com> wrote:
>>>> Dear Postgres users,
>>>> from last few months I am reading and searching for can postgresql used as application server? As postgresql supports many languages like pl/perl,
>>> Besides the previously mentioned nginx module there's apache's mod libpq http://asmith.id.au/mod_libpq.html
>>>
>>> But I'd stick to a language to wrap stuff in like php etc.
>> BTW, string concatenation in postgresql (plpgsql) is FASTER than in PHP
> But I can throw 1,000 cores at a large load with php. Much harder to do with plpgsql.

And? All of them would inevitably connect to the same postgresql; the solution is outside the scope.

...a web-server page cache is one element of a solution
On Aug 13, 2011, at 2:44 PM, sad@bestmx.ru wrote:
> c k wrote:
>> Yes, I know that I can not create a simple web application using only postgresql because we need a web server to serve the html content.
> You are wrong. You CAN!
>
> There is nginx_http_postgresql_module, exactly to connect the webserver directly to postgresql and to OUTPUT a query result to a browser.

You could also use something like node.js, which allows you to connect Javascript directly to Postgres using the frontend/backend protocol. With something like this, you could talk directly to Postgres from the browser:

http://ajaxian.com/archives/tcpsocket-sockets-in-the-browser
http://www.postgresql.org/docs/9.0/interactive/protocol.html

I'm not saying this is the right approach, but if that is what you are really after...

John DeSoi, Ph.D.
2011/8/16 sad@bestmx.ru <sad@bestmx.ru>: > Scott Marlowe пишет: >> >> On Mon, Aug 15, 2011 at 11:33 AM, sad@bestmx.ru<sad@bestmx.ru> wrote: >>> >>> Scott Marlowe пишет: >>>> >>>> On Sat, Aug 13, 2011 at 9:57 AM, c k<shreeseva.learning@gmail.com> >>>> wrote: >>>>> >>>>> Dear Postgres users, >>>>> from last few months I am reading and searching for can postgresql used >>>>> as >>>>> application server? As postgresql supports many languages like pl/perl, >>>> >>>> Besides the previously mentioned nginx module there's apache's mod >>>> libpq http://asmith.id.au/mod_libpq.html >>>> >>>> But I'd stick to a language to wrap stuff in like php etc. >>> >>> BTW, string concatenation in postgresql (plpgsql) is FASTER than in PHP >> >> But I can throw 1,000 cores at a large load with php. Much harder to >> do with plpgsql. > > and? > all of them would inevitably connect the same postgresql And they'd each need postgresql to do a concat? I'd hope nobody was dumb enough to program the app layer to do something like that. PG might make a decent app server, but there's no way you could scale it to millions of users like you could a farm of app servers.
Technically it can be done, but just because we can do something does not mean we should do something. Having said that...

We have been using a middleware product that shall remain nameless, that goes against a large commercial database that shall also remain nameless. The middleware has been migrating to a more and more database-based code set, and as an administrator of such a system I can state that this is awful.

Getting appropriate logging out of the application logic for both auditing purposes and troubleshooting is near impossible. Performance is nearly impossible to tune as everything runs inside the database: one giant process chewing up cores of CPU power.

Security is near impossible to manage as well. Again, almost everything needs to run as the same user. The database is now making calls to generate pdf objects and make printing calls.

None of the traditional tools can be used to integrate the application into the enterprise. The load balancer needs to add x-forwarded headers to http requests, but the custom http code can't handle that, so all web access appears to come from the load balancer. This violates regulatory requirements. Log file formats are not standard since none of the code is standard, which means that none of the event correlation tools can be used for intrusion detection etc.

It is just a nightmare. The previous version that had real middleware and real database servers was much better. The workloads were different, so each server could be tuned for what it was doing. We were able to purchase hardware appropriate to the task: big RAM for the database, big CPU for the middleware. Overall it was cheaper.

Just my $0.02

Evan

Scott Marlowe wrote:
> 2011/8/16 sad@bestmx.ru <sad@bestmx.ru>:
>> Scott Marlowe wrote:
>>> On Mon, Aug 15, 2011 at 11:33 AM, sad@bestmx.ru <sad@bestmx.ru> wrote:
>>>> Scott Marlowe wrote:
>>>>> On Sat, Aug 13, 2011 at 9:57 AM, c k <shreeseva.learning@gmail.com> wrote:
>>>>>> Dear Postgres users,
>>>>>> from last few months I am reading and searching for can postgresql used as application server? As postgresql supports many languages like pl/perl,
>>>>> Besides the previously mentioned nginx module there's apache's mod libpq http://asmith.id.au/mod_libpq.html
>>>>>
>>>>> But I'd stick to a language to wrap stuff in like php etc.
>>>> BTW, string concatenation in postgresql (plpgsql) is FASTER than in PHP
>>> But I can throw 1,000 cores at a large load with php. Much harder to do with plpgsql.
>> and?
>> all of them would inevitably connect the same postgresql
>
> And they'd each need postgresql to do a concat? I'd hope nobody was dumb enough to program the app layer to do something like that. PG might make a decent app server, but there's no way you could scale it to millions of users like you could a farm of app servers.
Evan Rempel wrote:
> Security is near impossible to manage as well. Again, almost everything needs to run as the same user.

throw your nameless DMS away, or fire the architect.

>>> Scott Marlowe wrote:
>>>> On Mon, Aug 15, 2011 at 11:33 AM, sad@bestmx.ru <sad@bestmx.ru> wrote:
>>>>> Scott Marlowe wrote:
>>>>>> On Sat, Aug 13, 2011 at 9:57 AM, c k <shreeseva.learning@gmail.com> wrote:
>>>>>>> Dear Postgres users,
>>>>>>> from last few months I am reading and searching for can postgresql used as application server? As postgresql supports many languages like pl/perl,
>>>>>> Besides the previously mentioned nginx module there's apache's mod libpq http://asmith.id.au/mod_libpq.html
>>>>>>
>>>>>> But I'd stick to a language to wrap stuff in like php etc.
>>>>> BTW, string concatenation in postgresql (plpgsql) is FASTER than in PHP
>>>> But I can throw 1,000 cores at a large load with php. Much harder to do with plpgsql.
>>> and?
>>> all of them would inevitably connect the same postgresql
>>
>> And they'd each need postgresql to do a concat? I'd hope nobody was dumb enough to program the app layer to do something like that. PG might make a decent app server, but there's no way you could scale it to millions of users like you could a farm of app servers.
>
> 2011/8/16 sad@bestmx.ru <sad@bestmx.ru>:
I can't let this slide :-D On Tue, Aug 16, 2011 at 9:27 AM, Evan Rempel <erempel@uvic.ca> wrote: > Technically it can be done, but just because we can do something does not > mean we should do something. Having said that... > > We have been using a middleware product that shall remain nameless, > that goes against a large commercial database that shall also remain > nameless. > The middleware has been migrating to a more and more database based code > set, and as an administrator of such a system I can state that this is > awful. From your description below, it truly sounds awful. However, this strikes me as being an issue of *which* logic is moved into the database instead of *that* logic is being moved into the database. > > Getting appropriate logging out of the application logic for both auditing > purposes > and trouble shooting is near impossible. Performance is nearly impossible to > tune as > everything runs inside the database. One giant process chewing up cores of > CPU power. LedgerSMB has been moving in the direction of more logic in the database because we have found the opposite. Troubleshooting is easier, code maintenance is easier, performance is easier to troubleshoot and manage, and security is far more manageable. Now, granted we are retrofitting security onto a codebase which had none when we forked, so that is a difference..... We have eliminated a much larger number of bottlenecks by going this way than we have run into. Again the issue is *which* logic goes into the db, and that's an absolutely key question when running middle tiers in the dbms. > > > Security is near impossible to manage as well. Again, almost everything > needs to run as > the same user. The database is now making calls to generate pdf objects and > make > printing calls. Ouch.... I guess I could kinda see the PDF generation calls (I'd still prefer a queue and transform approach), but not the printing calls. And if you can't move security into the database, then you have a kind of major problem: you aren't *really* generating a consistent and consistently enforced API in this way, and so you can't get to the roles a middleware solution gives you. As long as you still need the middleware, then the question really becomes, what logic needs to be centralized in the db and what logic is specific to each middleware application? What do you get from putting each thing in the database? My recommendation is to review that. If it is a single app db, then use stored procs as essentially named queries. If it is possible, move the printing calls into a separate process and have it signalled by the db app on database commit. But it really sounds like an unmaintainable mess. IME, however, that is avoidable while still placing the RDBMS in the center of the logic-complete application server environment. > > None of the traditional tools can be used to integrate the application into > the enterprise. > The load balancer needs to add x-forwarded headers to http requests, but the > custom http code can't handle that, so all web access appears to come from > the load > balancer. This violates regulatory requirements. Log file formats are not > standard > since none of the code is standard, this means that none of the event > correlation > tools can be used for intrusion detection etc. Ouch..... I second the suggestion that the architecture here lacks the separation of concerns approach necessary to make this work, and that either the software you are using is the problem or the architect is. 
However, it doesn't tell me that the approach of using the RDBMS as the entry point into an application server environment is necessarily a bad thing. Best Wishes, Chris Travers
On Tue, Aug 16, 2011 at 12:52 AM, Chris Travers <chris.travers@gmail.com> wrote:
> On Mon, Aug 15, 2011 at 10:05 PM, Scott Marlowe <scott.marlowe@gmail.com> wrote:
>> Yep. Also, it's REAL easy to stick a caching layer like memcached into the middle tier app layer, but nearly impossible to do so in pgsql. For large systems, this would make pg as an app server a nogo. But for small to medium sized systems that don't need caching it could work out.
>
> The other big scalability limitation for Pg as an app server is you really can't do connection pooling.

Why not? If you are serving http, just put a thin connection pooler in your http server (node.js would be great for that). If you are serving libpq directly, you can pool with pgbouncer.

merlin
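If the front end is Python rather than node.js, "put a thin connection pooler in your http server" can be as simple as psycopg2's built-in pool; the pool sizes, DSN, and api schema below are assumptions, and note that pooling this way means per-user database roles are off the table, as discussed further down the thread:

```python
# Hypothetical pooled dispatcher for a thin HTTP front end.
from psycopg2 import pool

db_pool = pool.ThreadedConnectionPool(2, 20, dsn="dbname=appdb user=app_frontend")

ALLOWED_FUNCS = {"hello_world", "get_order"}   # whitelist of callable functions


def call_api(func_name, *args):
    """Borrow a pooled connection, call a whitelisted stored function, return rows."""
    if func_name not in ALLOWED_FUNCS:
        raise ValueError("unknown function: %s" % func_name)
    conn = db_pool.getconn()
    try:
        cur = conn.cursor()
        placeholders = ", ".join(["%s"] * len(args))
        cur.execute("SELECT * FROM api.%s(%s)" % (func_name, placeholders), args)
        rows = cur.fetchall()
        conn.commit()
        return rows
    finally:
        db_pool.putconn(conn)
```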
On Tue, Aug 16, 2011 at 11:27 AM, Evan Rempel <erempel@uvic.ca> wrote:
> Technically it can be done, but just because we can do something does not mean we should do something. Having said that...
>
> We have been using a middleware product that shall remain nameless, that goes against a large commercial database that shall also remain nameless. The middleware has been migrating to a more and more database based code set, and as an administrator of such a system I can state that this is awful.
>
> Getting appropriate logging out of the application logic for both auditing purposes and trouble shooting is near impossible. Performance is nearly impossible to tune as everything runs inside the database. One giant process chewing up cores of CPU power.
>
> Security is near impossible to manage as well. Again, almost everything needs to run as the same user. The database is now making calls to generate pdf objects and make printing calls.
>
> None of the traditional tools can be used to integrate the application into the enterprise. The load balancer needs to add x-forwarded headers to http requests, but the custom http code can't handle that, so all web access appears to come from the load balancer. This violates regulatory requirements. Log file formats are not standard since none of the code is standard, this means that none of the event correlation tools can be used for intrusion detection etc.
>
> It is just a nightmare. The previous version that had real middleware and real database servers was much better. The workloads were different so each server could be tuned for what it was doing. We were able to purchase hardware appropriate to the task. Big RAM for database, big CPU for middleware. Overall it was cheaper.

Very few of those complaints would apply to postgres -- the database does not run in a single process (thank goodness for that!), logging via the various PLs is trivially done and effective, etc. Also, if you're serving http it does make sense to wrap postgres with a thin http server (lighttpd, node.js, nginx are all good choices). Most of the rest of your gripes seem to apply more to the specific middleware implementation than to a general appraisal of the technique. It's trivial to implement database-side security systems and many people do so.

One point that is getting lost in all this is that if you are using a database for an application server, this does not mean it's the same database as your main database or even on the same machine -- you can still keep traditional separation of roles and use something like dblink to transfer data. Regarding the use of postgres functions for things like reports and pdf generation, I see absolutely nothing wrong with doing this...although plpgsql is not a very good choice...pl/python or pl/java would be a better way to go.

merlin
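The "separate application database that reaches back to the main data store over dblink" arrangement Merlin mentions might look roughly like this; the database names, host, and query are illustrative, and the dblink extension is assumed to be installed in the application-side database:

```python
# Hypothetical pull of reference data from the main database into the
# "application" database's session, via dblink.
import psycopg2

APP_DB_DSN = "dbname=app_logic"                   # database acting as middleware

PULL_CUSTOMERS = """
SELECT *
  FROM dblink('dbname=main_data host=db1',        -- the real data store
              'SELECT id, name FROM customers WHERE active')
       AS t(id integer, name text)
"""

conn = psycopg2.connect(APP_DB_DSN)
cur = conn.cursor()
cur.execute(PULL_CUSTOMERS)
for cust_id, name in cur.fetchall():
    print(cust_id, name)
conn.close()
```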
On Tue, Aug 16, 2011 at 12:25 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> one point that is getting lost in all this is that if you are using a database for an application server, this does not mean it's the same database as your main database or even on the same machine -- you can still keep traditional separation of roles and use something like dblink to transfer data. regarding the use of postgres functions for

I'm thinking pl/proxy would be pretty amazing here. I still prefer using another layer in another language for an app server, but now that you've got me thinking a bit more out of the box, pl/proxy would increase your ability to scale quite a lot.
this whole discussion reminds me of the old adage... if your only tool is a hammer, every problem looks like a nail. -- john r pierce N 37, W 122 santa cruz ca mid-left coast
On Tue, Aug 16, 2011 at 11:08 AM, Merlin Moncure <mmoncure@gmail.com> wrote: > why not? if you are serving http, just put thin connection pooler in > your http server (node.js would be great for that). if you are > serving libpq directly, you can pool with pgbouncer. > Who enforces security and how? Best Wishes, Chris Travers
On Tue, Aug 16, 2011 at 1:47 PM, Chris Travers <chris.travers@gmail.com> wrote:
> On Tue, Aug 16, 2011 at 11:08 AM, Merlin Moncure <mmoncure@gmail.com> wrote:
>> why not? if you are serving http, just put a thin connection pooler in your http server (node.js would be great for that). if you are serving libpq directly, you can pool with pgbouncer.
>
> Who enforces security and how?

*) http wrapper (example node.js): check security in the wrapper. Presumably your application server would be keeping session state independently of the database session and would do verification on every call.

*) stock pgbouncer: there is essentially no strong way of checking security. What we ended up doing was modifying pgbouncer to keep track of the client auth and build a query whitelist. Very simple and effective. We also added in support for listen/notify. Imagine interacting directly with remote agents inside the psql console and being able to join client-provided data to other tables in the database :-). Ad hoc sql obviously can't be allowed from an untrusted source.

merlin
On Aug 16, 2011, at 11:42 AM, John R Pierce wrote: > > this whole discussion reminds me of the old adage... > > if your only tool is a hammer, every problem looks like a nail. I'm amazed nobody has mentioned http://www.sqlonrails.org/ yet. Cheers, Steve
On Tue, Aug 16, 2011 at 12:31 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
>> Who enforces security and how?
>
> *) http wrapper (example node.js): check security in the wrapper. presumably your application server would be keeping session state independently of the database session and would do verification on every call.

But here you are then having to re-implement the whole security system yourself. Does this give you a net benefit in complexity and performance over a standard middleware solution? Moreover, for this to work you have to include security information in meaningful tuples handed to the database, right? I don't like this. It sounds like a lot of complexity where a middleware solution for managing security would be far better.

> *) stock pgbouncer: there is essentially no strong way of checking security. what we ended up doing was modifying pgbouncer to keep track of the client auth and building a query whitelist. very simple and effective. we also added in support for listen/notify. imagine interacting directly with remote agents inside the psql console and being able to join client provided data to other tables in the database :-). ad hoc sql obviously can't be allowed from an untrusted source.

Ok, so here you are extending the connection pooler itself to provide traditional middleware functions rather than implementing them in the database itself, right? So this important app server function is not implemented in the database.

The thing is that for small to midsize businesses, I think Pg makes a great centerpiece for an app server environment. In the areas I am used to working in, even most larger businesses would be unlikely to need connection pooling. It seems that if you get to the point where you need connection pooling you have to do what you are doing and implement your security no further back than the connection pooler. This makes the connection pooler, not PostgreSQL, the entry point for the application server environment. It doesn't mean that Pg isn't taking on some of the load that middleware normally does at that point, but it is no longer capable of *being* the middleware.

Best Wishes,
Chris Travers
On Tue, Aug 16, 2011 at 4:04 PM, Chris Travers <chris.travers@gmail.com> wrote:
> On Tue, Aug 16, 2011 at 12:31 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
>>> Who enforces security and how?
>>
>> *) http wrapper (example node.js): check security in the wrapper. presumably your application server would be keeping session state independently of the database session and would do verification on every call.
>
> But here you are then having to re-implement the whole security system yourself. Does this give you a net benefit in complexity and performance over a standard middleware solution? Moreover, for this to work you have to include security information in meaningful tuples handed to the database, right? I don't like this. It sounds like a lot of complexity where a middleware solution for managing security would be far better.

/shrug. Pretty much every project I've ever worked on, application security has been ad hoc, database driven, not very complicated, and not a performance bottleneck. By the way, I think the opposite of you: security information relating to application roles and actions *should* be stored in the database (it is, after all, data) even if it is enforced by a classic middleware. What happens when some other application, written by another team, connects to the database? Saying 'that shouldn't be allowed to happen' is simply not the reality in many enterprise environments. I don't see what's so complicated about storing who a person is and what they are allowed to do, and checking the permission just before that 'what' is about to get done.

>> *) stock pgbouncer: there is essentially no strong way of checking security. what we ended up doing was modifying pgbouncer to keep track of the client auth and building a query whitelist. very simple and effective. we also added in support for listen/notify. imagine interacting directly with remote agents inside the psql console and being able to join client provided data to other tables in the database :-). ad hoc sql obviously can't be allowed from an untrusted source.
>
> Ok, so here you are extending the connection pooler itself to provide traditional middleware functions rather than implementing them in the database itself, right? So this important app server function is not implemented in the database.

Well, not exactly. It is a concession to security. Allowing untrusted entities to send ad hoc sql to a database is obviously not going to fly, so it must be dealt with appropriately. Note pgbouncer (or node.js etc.) is not defining or handling session auth, just playing a small role in enforcement; an auth'd application's service requests are essentially protocol noise and I see no problem letting the protocol handler bounce them out. Also, whatever you happen to wrap your 'middleware' database in is still part of the middleware.

Also, I think most people would not go the libpq route even though listen/notify allows you to create beautifully interactive systems -- mainly due to the weirdness of the whole thing and the amount of work you have to do to get it safe.

> The thing is that for small to midsize businesses, I think Pg makes a great centerpiece for an app server environment. In the areas I am used to working in, even most larger businesses would be unlikely to need connection pooling. It seems that if you get to the point where you need to use connection pooling you have to do what you are doing and implement your security no further back than the connection pooler. This makes the connection pooler, not PostgreSQL, the entry point for the application server environment. It doesn't mean that Pg isn't taking on some of the load that middleware normally does at that point but it is no longer capable of *being* the middleware.

Sure, that is a distinction I guess you could make. But the real point is all the zillions of LOC that just stupidly bounce data around -- kill em with fire, i say :-D.

merlin
On Tue, Aug 16, 2011 at 3:51 PM, Merlin Moncure <mmoncure@gmail.com> wrote: > > /shrug. pretty much every project I've ever worked on application > security has been ad hoc, database driven, not very complicated, and > not a performance bottleneck. By the way, I think the opposite of > you: security information relating to application roles and actions > *should* be stored in the database (it is, after all, data) even if it > is enforced by a classic middleware. What happens when some other > application, written by another team, connects to the database? Not understanding my perspective. Ideally you'd use the RDBMS's functionality directly to enforce security via GRANT and REVOKE statements. Whether it is stored in the database or not is for the RDBMS to decide. What I am saying is that the further back you enforce the security the more you can guarantee consistent enforcement across applications. Connection pooling makes this much harder because you can't enforce it within the db using the normal methods and end up having to implement it all over. Instead you have to implement security before the data hits the database. That's a big difference and it has HUGE ramifications for security exposure vs utility of an application. > Saying 'that shouldn't be allowed to happen' is simply not the reality > in many enterprise environments. I don't see what's so complicated > about storing who a person is and what they are allowed to do, and > checking the permission just before that 'what' is about to get done. No, I think the opposite. If the RDBMS enforces security then any client can connect and you don't have to worry about this problem at all. However if you push this into the RDBMS and use native facilities, then you can't use connection pooling. > > > well, not exactly. it is a concession to security. allowing > untrusted entities to send ad hoc sql to a database is obviously not > going to fly so it must be dealt with appropriately. note pgbouncer > (or node.js etc) is not defining or handling session auth, just > playing a small role enforcement. an auth'd application service > requests are essentially protocol noise and I see no problem letting > the protocol handler bounce them out. also, whatever you happen to > wrap your 'middleware' database is still part of the middleware. Well, what you are actually doing here is enforcing security on a level of abstraction away from the database. This means that you can't allow ad hoc queries because you can't guarantee safety. I don't know what you get by doing this instead of providing interface-level security in the part of your middleware. In fact that's essentially what you have to do, is it not? > > also I think most people would not go the libpq route even though > listen/notify allows you to create beautifully interactive systems -- > mainly due to weirdness of the whole thing and the amount of work you > have to do to get it safe. Why? Each listener has to have its own connection, right? Otherwise there is nowhere to send the notifications to. That connection has to be attached to a db role. The DB role has to have permission to some portions of the database but not others, right? I don't see why that is hard to make safe. > > sure. that is a distinction I guess you could make. but the real > point is all the zillions of LOC that just stupidly bounce data around > -- kill em with fire, i say :-D. > We can agree with that part. Best Wishes, Chris Travers
The web server must be external, but most of the application functionality can be implemented using database stored procs and functions.
Chaitanya Kulkarni
On 08/16/2011 03:06 AM, Craig Ringer wrote:
> On 15/08/2011 10:36 PM, Merlin Moncure wrote:
>> On Sat, Aug 13, 2011 at 2:30 PM, Andreas Joseph Krogh <andreak@officenet.no> wrote:
>>> No, PG has never, and will never, act as an application-server.
>>
>> Why in the world not?
>
> The biggest reason is safety. Beyond that, the lack of autonomous transactions, stored procedures, in-DB timers, and support for any protocol other than the native Pg database query protocol mean it's also rather impractical.
>
> I guess theoretically one could embed a JVM / Python instance / whatever in the postmaster and have it spawn new backends for incoming connections with other protocols. But ... why? Why add all that complexity and - more importantly - contaminate PostgreSQL's address space with more code that can fail when you don't have to? PLs and user C procedures are already risk enough as far as I'm concerned.

As I've mentioned in other posts, we are using postgresql as a hybrid application server and have found it to be very practical and easy to code. For functions that need to run on a regular basis, we use a cron job, though from what I understand pgAgent should be able to handle that just as well. Instead of using Listen/Notify to tell an application to send e-mails, we have an email function written in plpython that sends the email directly. The reason to do this is that it gives you "write once" code which makes your application pretty much client-agnostic.

PG already spawns a new backend for each connection, so your specialized code has little chance of hosing the whole database. We connect to other databases when we need to, mostly mysql using the MySQLdb python module, to move data to and from our website. Scalability would be increased with multi-master replication, which bucardo currently does (I haven't tested it yet). One issue you have is using database-style hardware for application server needs, which may be more expensive.
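A hedged sketch of the kind of "send the email directly from plpython" function described here; the function name, sender address, and local SMTP host are assumptions, and plpythonu (the untrusted variant) is required because the function opens a network connection:

```python
# Install a hypothetical plpython function that sends mail via a local MTA.
import psycopg2

CREATE_SEND_EMAIL = """
CREATE OR REPLACE FUNCTION send_email(recipient text, subject text, body text)
RETURNS void AS $$
    import smtplib
    from email.mime.text import MIMEText

    msg = MIMEText(body)
    msg['Subject'] = subject
    msg['From'] = 'app@example.com'          # assumed sender address
    msg['To'] = recipient

    smtp = smtplib.SMTP('localhost')         # assumed local MTA
    smtp.sendmail('app@example.com', [recipient], msg.as_string())
    smtp.quit()
$$ LANGUAGE plpythonu;
"""

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=appdb")  # assumed DSN
    conn.cursor().execute(CREATE_SEND_EMAIL)
    conn.commit()
    conn.close()
```

The trade-off is that the SMTP call runs at call time, inside the calling transaction, so a transaction that later rolls back may already have sent the message; the queue-plus-worker approach discussed next avoids that coupling.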
On 08/15/2011 11:50 PM, Chris Travers wrote: > On Mon, Aug 15, 2011 at 1:44 PM, Darren Duncan<darren@darrenduncan.net> wrote: > >> I believe that it is ideal for Postgres to be computationally complete in >> that one *could* use it to implement a complete application. That isn't to >> say one should do this as a matter of course, good to use appropriate tools >> for a job, but that it should at least be possible if one wanted to. -- > > I think the limit is actually transactional control. > > So, suppose I manage inventory.... > > I want an email to go out to the ordering manager when the quantity I > have of an item drops below the re-order point. I also want this > email NOT to go out if the transaction rolls back. (Wait, the order > of 50000 widgets I just processed rolled back because it isn't to a > valid customer! We normally only sell 50000 per year anyway. No need > for the email.) > > 1) I don't see how this is possible directly from within PostgreSQL > 2) Given the obvious ways around this, I don't see why this is > desirable to add to PostgreSQL..... > > Best Wishes, > Chris Travers > We are doing this same sort of thing now. If the transaction goes through, the email record gets written to a table. We have a cron job that calls a database function that processes all emails that have not been processed yet. If the transaction gets rolled back, the email record does not get written to the table and the email does not get sent. In your scenario, if you send the NOTIFY message and then you roll back the transaction, the helper application will still send the email. IOW, doing this outside of the database can more easily break your transactional integrity. Sim
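A worker along the lines Sim describes, meant to be run from cron: it picks up rows that committed transactions left in the hypothetical email_queue table from the earlier sketch, sends them, and marks them sent. The sender address and MTA are again assumptions:

```python
# Hypothetical cron job: deliver queued emails written by committed transactions.
import smtplib
from email.mime.text import MIMEText

import psycopg2


def process_email_queue(dsn="dbname=appdb"):
    conn = psycopg2.connect(dsn)
    try:
        cur = conn.cursor()
        cur.execute("""SELECT id, recipient, subject, body
                         FROM email_queue
                        WHERE sent_at IS NULL
                        ORDER BY id
                          FOR UPDATE""")
        for row_id, recipient, subject, body in cur.fetchall():
            msg = MIMEText(body)
            msg["Subject"] = subject
            msg["From"] = "app@example.com"          # assumed sender
            msg["To"] = recipient
            smtp = smtplib.SMTP("localhost")         # assumed local MTA
            smtp.sendmail("app@example.com", [recipient], msg.as_string())
            smtp.quit()
            cur.execute("UPDATE email_queue SET sent_at = now() WHERE id = %s",
                        (row_id,))
        conn.commit()   # rows are only marked sent if the whole batch succeeded
    finally:
        conn.close()


if __name__ == "__main__":
    process_email_queue()
```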
On 08/16/2011 07:04 AM, Darren Duncan wrote:
> Chris Travers wrote:
>> On Mon, Aug 15, 2011 at 3:47 PM, Darren Duncan <darren@darrenduncan.net> wrote:
>>> I believe we basically have all the foundation already, with maybe procedures executable outside transactions being the last major part.
>>
>> Why is this desirable? Why is it more desirable than actually using the listen/notify infrastructure that exists already?
>
> Maybe listen/notify is sufficient by itself. I withdraw my "procedures executable outside transactions" comment for now, and just bring it up later if I can think of a specific use case that other mechanisms don't satisfy. -- Darren Duncan

LISTEN/NOTIFY is good if you want to call an outside application; however, if you want to call another database function, then it is just plain wrong. I need a daemon that will listen for notify calls so that it can open a new database session and call the function. With an asynchronous function, my function would be able to spawn a new session and run in its own transaction while the calling function would be able to complete and go away. See http://archives.postgresql.org/pgsql-hackers/2011-04/msg01503.php for my proposal on this issue.

One problem we have with LISTEN/NOTIFY (and I haven't found the cause for this yet) is that every once in a while my daemon stops listening. It may be after a month of use or longer, and may be caused by the database being restarted or something similar. When the daemon stops listening, it doesn't give any errors or any indication that it isn't working anymore. It is only after a user complains that something hasn't been updated in a day or so that we manually restart it. It doesn't happen very often, but it does happen on occasion.
On Aug 17, 2011, at 12:53 AM, Sim Zacks wrote: > In your scenario, if you send the NOTIFY message and then you roll back the transaction, the helper application will stillsend the email. How? NOTIFY doesn't get delivered until the transaction commits. -- Scott Ribe scott_ribe@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice
On Aug 17, 2011, at 1:05 AM, Sim Zacks wrote: > One problem we have with LISTEN/NOTIFY (and I haven't found the cause for this yet) is every once in a while my daemonstops listening. It may be after a month of use or longer, and may be caused by the database being restarted or somethingsimilar. When the daemon stops listening, it doesn't give any errors or indication that it isn't working anymore. So your daemon has a bug. When the database is restarted, connections will be closed, and the daemon should certainly noticethat. Of course the cause may be something else, but either way I doubt it's a problem with NOTIFY/LISTEN. -- Scott Ribe scott_ribe@elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice
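For the LISTEN side, the daemon Scott Ribe is describing has to notice that its connection is gone and reconnect rather than silently going deaf. A sketch of that loop with psycopg2's asynchronous-notification API (the channel name, DSN, and retry interval are assumptions):

```python
# Hypothetical LISTEN daemon that reconnects when the server goes away.
import select
import time

import psycopg2
import psycopg2.extensions

DSN = "dbname=appdb"
CHANNEL = "email_queue"


def handle(notify):
    print("notification on %s, payload=%r" % (notify.channel, notify.payload))


def listen_forever():
    while True:
        try:
            conn = psycopg2.connect(DSN)
            conn.set_isolation_level(
                psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
            conn.cursor().execute("LISTEN %s;" % CHANNEL)
            while True:
                # Wait for socket activity, but wake at least every 30s so a
                # dead connection is detected by the poll() call below.
                select.select([conn], [], [], 30)
                conn.poll()                  # raises if the connection is gone
                while conn.notifies:
                    handle(conn.notifies.pop(0))
        except (psycopg2.OperationalError, psycopg2.InterfaceError) as exc:
            print("connection lost (%s); reconnecting in 5s" % exc)
            time.sleep(5)


if __name__ == "__main__":
    listen_forever()
```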
On Tue, Aug 16, 2011 at 6:14 PM, Chris Travers <chris.travers@gmail.com> wrote:
> On Tue, Aug 16, 2011 at 3:51 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
>> /shrug. pretty much every project I've ever worked on application security has been ad hoc, database driven, not very complicated, and not a performance bottleneck. By the way, I think the opposite of you: security information relating to application roles and actions *should* be stored in the database (it is, after all, data) even if it is enforced by a classic middleware. What happens when some other application, written by another team, connects to the database?
>
> Not understanding my perspective. Ideally you'd use the RDBMS's functionality directly to enforce security via GRANT and REVOKE statements. Whether it is stored in the database or not is for the RDBMS to decide.

GRANT/REVOKE only constrain read/write privileges to a database. Application-level security is typically much finer grained than that. Also, using SQL roles for actual user roles is not typically done, for various reasons. Generally, SQL roles are used to define 'what' is logging in, not necessarily 'who'. If you allow any SQL through from the application then table-level security becomes very important...otherwise not so much.

> What I am saying is that the further back you enforce the security the more you can guarantee consistent enforcement across applications. Connection pooling makes this much harder because you can't enforce it within the db using the normal methods and end up having to implement it all over. Instead you have to implement security before the data hits the database. That's a big difference and it has HUGE ramifications for security exposure vs utility of an application.

That is totally incorrect. pgbouncer maintains separate pools for each role and only intermingles queries for like roles. Any intelligent connection pooler would do the same. You lose access to database session features, but database-level security features are still enforced. Whether you use connection pooling or not really doesn't play into this from my point of view.

Recall that in our hypothetical 'database as middleware' database, the main tables and their data are not actually in the database -- the only tables available to query would be session state, etc. Most operations would funnel back to the main database through procedures, and application-level security would be checked there. Now, if you want your system to be simple, tight, and fast, you could combine those two databases but would have to figure out how to manage security for a libpq-speaking application. Like I said, in my case I did this with a whitelist, but it's not the only way.

>> well, not exactly. it is a concession to security. allowing untrusted entities to send ad hoc sql to a database is obviously not going to fly so it must be dealt with appropriately. note pgbouncer (or node.js etc) is not defining or handling session auth, just playing a small role in enforcement. an auth'd application's service requests are essentially protocol noise and I see no problem letting the protocol handler bounce them out. also, whatever you happen to wrap your 'middleware' database in is still part of the middleware.
>
> Well, what you are actually doing here is enforcing security on a level of abstraction away from the database. This means that you can't allow ad hoc queries because you can't guarantee safety.
> I don't know what you get by doing this instead of providing interface-level security in the part of your middleware. In fact that's essentially what you have to do, is it not?

If you expose (as I did) your middleware api as a 100% sql function interface, then yes, ad hoc sql is not allowed. If you wrap your middleware with an http server then ad hoc sql would not be allowed either. I doubt the day will come where the browser will be sending ad hoc SQL queries directly to a database.

The reason to use a database backend to handle middleware functions is based on the strength of the SQL language, supported by the various PL extensions you may want to include, to manage various everyday programming tasks. The security discussion is a bit of a sideshow because it is generally a tiny fraction of the coding that typically happens at this level. An individual's personal appraisal of this idea will largely depend on certain personal factors that will vary from developer to developer. An unbiased analysis would probably conclude that it is an interesting but unproven approach with a lot of potential.

>> also I think most people would not go the libpq route even though listen/notify allows you to create beautifully interactive systems -- mainly due to the weirdness of the whole thing and the amount of work you have to do to get it safe.
>
> Why? Each listener has to have its own connection, right? Otherwise there is nowhere to send the notifications to. That connection has to be attached to a db role. The DB role has to have permission to some portions of the database but not others, right? I don't see why that is hard to make safe.

It's hard to make safe because allowing applications to speak libpq means you have to be worried about various nasty things untrusted SQL can do to your database server. In the end, I think the only reasonable way to do this is a whitelist.

merlin
On Tue, Aug 16, 2011 at 11:53 PM, Sim Zacks <sim@compulab.co.il> wrote: > We are doing this same sort of thing now. If the transaction goes through, > the email record gets written to a table. We have a cron job that calls a > database function that processes all emails that have not been processed > yet. If the transaction gets rolled back, the email record does not get > written to the table and the email does not get sent. > In your scenario, if you send the NOTIFY message and then you roll back the > transaction, the helper application will still send the email. IOW, doing > this outside of the database can more easily break your transactional > integrity. > Notify hits on commit, right? Best Wishes, Chris Travers
On Wed, Aug 17, 2011 at 7:40 AM, Merlin Moncure <mmoncure@gmail.com> wrote: > > GRANT/REVOKE only constrain read/write privileges to a database. > Application level security is typically much finer grained than that. > Also, I using SQL roles for actual user roles is not typically done > for various reasons. Generally, SQL roles are used to define 'what' > is logging in, no necessarily 'who'. If you allow and SQL through > from the application then table level security becomes very > important...otherwise not so much. You can use roles to affect things on a fairly granular level, if you combine a relation interface with a functional one. And the fact most people use SQL roles this way is due to enforcing security in the middleware. The disadvantage is that the database has to trust the middleware and the other clients connecting to it. To some extent this is unavoidable, but in general, reducing the level of trust between the components reduces the security exposure. Obviously this has some issues in terms of how far it can scale. One of the things we decided to do with LedgerSMB was to make every application user a database user and then enforce security on the back end. > >> What I am saying is that the further back you enforce the security the >> more you can guarantee consistent enforcement across applications. >> Connection pooling makes this much harder because you can't enforce it >> within the db using the normal methods and end up having to implement >> it all over. Instead you have to implement security before the data >> hits the database. That's a big difference and it has HUGE >> ramifications for security exposure vs utility of an application. > > That is totally incorrect. pgbouncer maintains separate pools for > each role and only intermingles queries for like roles. Any > intelligent connection pooler would do the same. You lose access to > database session features but database level security features are > still enforced. Whether you connection pool or not really doesn't > play into this from my point of view. Right, but then you still can't enforce *user* permissions on the database because there isn't a point in having a connection pool if each user gets one as a db user, is there? > > Recall that in our hypothetical 'database as middleware' database, the > main tables and there data are not actually in the database -- the > only tables available to query would be session state, etc. Most > operations would funnel through back to the main database through > procedures and application level security would be checked there. > Now, if you want your system to be simple, tight, and fast, you could > combine those two databases but would have to figure out how to manage > security to a libpq speaking application. Like I said, in my case I > did this with a whitelist, but it's not the only way. I guess I am approaching it differently as looking at logical tiers getting incorporated into the RDBMS, which becomes the centerpiece of and entrance point to the application server environment. That's why I am talking about the database taking over traditional middleware functions rather than having a separate database...... > >>> well, not exactly. it is a concession to security. allowing > If you expose (as I did) your middleware api as a 100% sql function > interface, then yes ad hoc sql is not allowed. If you wrap your > middleware with a http server than ad hoc sql would not be allowed. 
> I doubt the day will come where the browser will be sending ad hoc SQL > queries directly to a database. One of my LedgerSMB customers decided they wanted to be able to distribute SQL scripts to bookkeepers and have them run them via pgAdmin. So from the browser? No. From other software clients? Quite possibly. What we were able to do was assign the specifically needed functionality to the pgAdmin users and thus ensure that security and data integrity were not compromised by this approach. Now, the users in this case require a lot of read access, with a few very specific write permissions. The security footprint here is very low. We couldn't have accommodated that request safely, however, if our permissions system wasn't geared around the db enforcing permissions per user. > > The reason to use a database backend to handle middleware functions is > based on the strength of the SQL language, supported by the various PL > extensions you may want to include, to manage various everyday > programming tasks. The security discussion is a bit of a sideshow > because it is generally a tiny fraction of the coding that typically > happens at this level. An individual's personal appraisal of this > idea will largely depend on certain personal factors that will vary > from developer to developer. An unbiased analysis would probably > conclude that it is an interesting, but unproven approach with a lot > of potential. It's a tiny piece of the code, but it's a critical one, and when something goes wrong..... Here's my approach generally. 1) client apps (middleware and web apps become client apps in this approach) log in to the db with user-supplied credentials. The database enforces permissions on a table, view, and function level. Some operations are controlled at the table level, some on a function level. As we start requiring newer versions of PostgreSQL, it is more and more likely that function-level permissions will go away. Permissions are granted to roles which are granted to users. 2) All new code in LedgerSMB moves through a discoverable stored procedure interface, so the actual web app is a thin Perl glue between a user interface template and stored procedures. The interface template can be designed around stored procedure inputs and outputs, and the glue itself has minimal knowledge of these things. This has some costs in terms of documentation but the SQL API is self-documenting to the extent possible. 3) PostgreSQL in this structure mostly accepts function calls and retrieves/stores data, always returning a meaningful tuple to the extent possible. So if we select * from post_transaction(.....) we would get the header information (including the id from the transaction) back to the application. 4) In some cases data may be queued in queue tables for further processing. NOTIFY is raised (via triggers) and delivered on commit to listening processes, which can then process the data and further store it somewhere as needed. Currently there are no uses of this in LedgerSMB out of the box, typically because this addresses needs of specific customers and not the general application. There are of course limits to all of these approaches. If I were to set up a shopping cart against such a database, I would probably use one database role for all customers and enforce authentication on the web server. Authorization would still be via the db though. 
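A minimal sketch of the role layout in point (1), with every name invented for illustration: privileges are granted to a functional role (the "what"), and the role is granted to the actual database users (the "who").

CREATE TABLE customers (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

CREATE ROLE ar_clerk NOLOGIN;                  -- functional role
GRANT SELECT, INSERT ON customers TO ar_clerk;
GRANT USAGE ON SEQUENCE customers_id_seq TO ar_clerk;

CREATE ROLE alice LOGIN PASSWORD 'changeme';   -- an application user
GRANT ar_clerk TO alice;                       -- alice inherits ar_clerk's rights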
> >>> also I think most people would not go the libpq route even though >>> listen/notify allows you to create beautifully interactive systems -- >>> mainly due to weirdness of the whole thing and the amount of work you >>> have to do to get it safe. >> >> Why? Each listener has to have its own connection, right? Otherwise >> there is nowhere to send the notifications to. That connection has to >> be attached to a db role. The DB role has to have permission to some >> portions of the database but not others, right? I don't see why that >> is hard to make safe. > > It's hard to make safe because allowing applications to speak libpq > means you have to be worried about various nasty things untrusted SQL > can do to your database server. In the end, I think the only > reasonable way to do this is a whitelist. Two points: 1) yes, coming up with proper permissions for a reasonably complex database is a lot of work. No question about it. But if you enforce these in the back end, then allowing ad-hoc SQL isn't the end of the world. For example, suppose we have two tables: journal (the accounting general journal header info per entry) and journal_line (line items). We might want to require that all transactions that get stored are balanced, and that is something a relational interface is not good at. So we might allow select privileges on both these tables based on role, but only allow insert to the table owner, with access to a security definer function to store transactions. One could further allow update on columns indicating transaction status to some roles..... These roles could all be granted to relevant users. 2) Even if the above isn't the direction one wants to go, your objection doesn't address the use case I am addressing, which is automated processes that monitor data queues and process data when transactions commit data to those queues. In those cases, you are talking about applications which need remarkably limited access to the database in the vast majority of cases, so creating a role with appropriate access for those applications doesn't unduly increase the security profile of the application. I don't see why these processes at any rate would go through a connection pooler since, well, they need a constantly open connection. Best Wishes, Chris Travers
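A sketch of that journal/journal_line arrangement, assuming a deliberately simplified schema (the real tables would be richer): reads are granted directly to the role, while writes must go through a SECURITY DEFINER function that rejects unbalanced entries and hands the stored header row back to the caller.

CREATE TABLE journal (
    id          serial PRIMARY KEY,
    description text,
    post_date   date NOT NULL
);

CREATE TABLE journal_line (
    journal_id int NOT NULL REFERENCES journal(id),
    account    text NOT NULL,
    amount     numeric NOT NULL
);

CREATE ROLE bookkeeper NOLOGIN;
GRANT SELECT ON journal, journal_line TO bookkeeper;  -- ad hoc reads are fine
-- no INSERT/UPDATE/DELETE grants: writes must use the function below

CREATE FUNCTION post_journal(in_description text,
                             in_post_date   date,
                             in_lines       journal_line[])
RETURNS journal AS $$
DECLARE
    hdr journal;
BEGIN
    IF (SELECT sum(amount) FROM unnest(in_lines)) <> 0 THEN
        RAISE EXCEPTION 'journal entry does not balance';
    END IF;
    INSERT INTO journal (description, post_date)
    VALUES (in_description, in_post_date)
    RETURNING * INTO hdr;
    INSERT INTO journal_line (journal_id, account, amount)
    SELECT hdr.id, account, amount FROM unnest(in_lines);
    RETURN hdr;  -- caller gets the header row back, generated id included
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;  -- runs with the table owner's rights

GRANT EXECUTE ON FUNCTION post_journal(text, date, journal_line[]) TO bookkeeper;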
On 08/17/11 7:40 AM, Merlin Moncure wrote: > GRANT/REVOKE only constrain read/write privileges to a database. at a table level, and even distinguishing between INSERT (writing new data) and UPDATE (updating existing data). you can get even finer granularity, using functions with SECURITY DEFINER-based permissions. -- john r pierce N 37, W 122 santa cruz ca mid-left coast
On 18/08/2011 12:35 AM, John R Pierce wrote: > On 08/17/11 7:40 AM, Merlin Moncure wrote: >> GRANT/REVOKE only constrain read/write privileges to a database. > > at a table level, and even distinguishing between INSERT (writing new > data) and UPDATE (updating existing data). Column level, actually :-) and they can control read access as well as write access. Further control for writes can be applied using triggers that RAISE EXCEPTION when they don't like something. -- Craig Ringer
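A small sketch of both mechanisms, with invented table and role names: a column-level UPDATE grant, plus a trigger that vetoes a change it does not like.

CREATE TABLE orders (
    id       serial PRIMARY KEY,
    customer text NOT NULL,
    status   text NOT NULL DEFAULT 'open',
    total    numeric NOT NULL
);

CREATE ROLE order_clerk NOLOGIN;
GRANT SELECT ON orders TO order_clerk;
GRANT UPDATE (status) ON orders TO order_clerk;  -- only this column is writable

CREATE FUNCTION forbid_reopening() RETURNS trigger AS $$
BEGIN
    IF OLD.status = 'closed' AND NEW.status <> 'closed' THEN
        RAISE EXCEPTION 'closed orders may not be reopened';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_status_guard
    BEFORE UPDATE ON orders
    FOR EACH ROW EXECUTE PROCEDURE forbid_reopening();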
On 08/17/2011 06:13 PM, Chris Travers wrote: > On Tue, Aug 16, 2011 at 11:53 PM, Sim Zacks <sim@compulab.co.il> wrote: >> We are doing this same sort of thing now. If the transaction goes through, >> the email record gets written to a table. We have a cron job that calls a >> database function that processes all emails that have not been processed >> yet. If the transaction gets rolled back, the email record does not get >> written to the table and the email does not get sent. >> In your scenario, if you send the NOTIFY message and then you roll back the >> transaction, the helper application will still send the email. IOW, doing >> this outside of the database can more easily break your transactional >> integrity. > Notify hits on commit, right? > Best Wishes, > Chris Travers My bad. I just tested this. Notify doesn't get sent until after commit.
On 08/17/2011 05:34 PM, Scott Ribe wrote: > On Aug 17, 2011, at 1:05 AM, Sim Zacks wrote: >> One problem we have with LISTEN/NOTIFY (and I haven't found the cause for this yet) is every once in a while my daemon stops listening. It may be after a month of use or longer, and may be caused by the database being restarted or something similar. When the daemon stops listening, it doesn't give any errors or indication that it isn't working anymore. > So your daemon has a bug. When the database is restarted, connections will be closed, and the daemon should certainly notice that. Of course the cause may be something else, but either way I doubt it's a problem with NOTIFY/LISTEN. The point was not whether I have a bug in an external application, the point is that I need an external application which creates more overhead and another point of failure in the application stack.
On Wed, Aug 17, 2011 at 9:38 PM, Sim Zacks <sim@compulab.co.il> wrote: > The point was not whether I have a bug in an external application, the point > is that I need an external application which creates more overhead and > another point of failure in the application stack. > 1) Not sure how an external python script is different from a PL/Python sproc except that the former exists external to transaction control. 2) there is absolutely no reason you can't build redundancy into this system. 3) The overhead really shouldn't be bad; if your parts are well-modularized and carefully designed, overhead really should be minimal. I don't see what you gain from using cron that you don't gain from using a persistent process and notify... you could even have a cron script to check that it is running and start it if not from time to time. Best Wishes, Chris Travers
Chris Travers <chris.travers@gmail.com> writes: > I want an email to go out to the ordering manager when the quantity I > have of an item drops below the re-order point. I also want this > email NOT to go out if the transaction rolls back. (Wait, the order > of 50000 widgets I just processed rolled back because it isn't to a > valid customer! We normally only sell 50000 per year anyway. No need > for the email.) Just use PGQ and be done with it. You have transactional and asynchronous behavior. Typically, a trigger would produce events in the queue, and a separate daemon will consume the queue and send emails. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
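A hedged sketch of the PGQ pattern Dimitri describes, assuming the Skytools pgq schema is installed; the queue name, event type, and inventory table are all invented for illustration.

CREATE TABLE inventory (
    sku           text PRIMARY KEY,
    quantity      int NOT NULL,
    reorder_point int NOT NULL
);

SELECT pgq.create_queue('email_events');

CREATE FUNCTION queue_reorder_email() RETURNS trigger AS $$
BEGIN
    IF NEW.quantity < NEW.reorder_point THEN
        -- the event is written inside the current transaction, so a
        -- rollback discards it and no email is ever sent
        PERFORM pgq.insert_event('email_events', 'reorder',
                                 NEW.sku || ' is down to ' || NEW.quantity);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER reorder_email_trigger
    AFTER UPDATE OF quantity ON inventory
    FOR EACH ROW EXECUTE PROCEDURE queue_reorder_email();

A separate consumer daemon then registers on 'email_events', fetches committed batches (pgq.next_batch / pgq.get_batch_events), sends the mail, and acknowledges with pgq.finish_batch.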
c k <shreeseva.learning@gmail.com> writes: > Many users are using it and found it stable and scalable. Important is that > web server is external to the database and a mod_pgsql like mod_plsql is > used to connect web server to database. Each page is considered as a stored > procedure in the oracle database. I am not thinking of implementing as it is It's been around for a long time already: http://asmith.id.au/mod_libpq.html Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On 08/18/2011 07:57 AM, Chris Travers wrote: > On Wed, Aug 17, 2011 at 9:38 PM, Sim Zacks <sim@compulab.co.il> wrote: >> The point was not whether I have a bug in an external application, the point >> is that I need an external application which creates more overhead and >> another point of failure in the application stack. > 1) Not sure how an external python script is different from a PL/Python > sproc except that the former exists external to transaction control. There are many differences. 1) If I have a database function and I copy my database to another server, the function still works. If I have an external daemon application, I not only have to copy my database, I also have to copy the daemon application. Then I have to build an init script and make sure it runs at startup. My LISTEN/NOTIFY daemon is a C application, so when I move my database to a server on a different platform, I have to recompile it. > 2) there is absolutely no reason you can't build redundancy into this > system. It's not a question of whether I can or cannot build redundancy, it is a question of whether I have to build an entire system in order to call a database function from another database function. The only reason this is complicated is because it needs to be in its own session. That simple issue shouldn't force me to build a) a daemon application, b) redundancy to ensure that it is running, and c) a component that is not included in my database backup/restore. Remember, I don't want to build a _system_, I basically want an asynchronous trigger: on a specific event, call a database function in its own transaction space and allow the existing transaction to end. > 3) The overhead really shouldn't be bad; if your parts are well-modularized > and carefully designed, overhead really should be minimal. Any overhead that is not necessary should not be added in. It is the minor level of frustration that something didn't work when I migrated servers until the "Oh yeah" kicked in. Then looking through all my notes to find the compilation instructions for my daemon because we moved from a 32-bit server to a 64-bit one. Then trying to figure out the syntax for the init script, because we moved from Gentoo to Debian and it is slightly different. It isn't a lot of overhead, but it is completely unnecessary in our situation. I will agree that this is entirely necessary if your application actually uses an external system and the database communicates through LISTEN/NOTIFY. You have two systems to deal with in any case, but for me the only external component is having the daemon listen so it can call another function in the database. IOW, I don't generally deal with anything else on the server. Sim
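One workaround sometimes used for exactly this "function in its own session" itch is dblink, sketched below with an invented helper name. Note that it is not the transactional glue Sim wants: the second session starts immediately, cannot see the caller's uncommitted work, and will still run even if the caller later rolls back.

-- assumes the dblink contrib module is installed and that a local
-- loopback connection can authenticate (e.g. trust or .pgpass)
CREATE FUNCTION fire_and_forget(fn_call text) RETURNS void AS $$
BEGIN
    -- open a second backend and hand it the call; its transaction is
    -- completely independent of ours
    PERFORM dblink_connect('async_conn', 'dbname=' || current_database());
    PERFORM dblink_send_query('async_conn', fn_call);  -- returns immediately
    -- a fuller version would later collect the result with
    -- dblink_get_result('async_conn') and call dblink_disconnect('async_conn')
END;
$$ LANGUAGE plpgsql;

-- usage, with a hypothetical function name:
--   SELECT fire_and_forget('SELECT process_email_queue()');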
On Thu, Aug 18, 2011 at 3:40 AM, Dimitri Fontaine <dimitri@2ndquadrant.fr> wrote: > Chris Travers <chris.travers@gmail.com> writes: >> I want an email to go out to the ordering manager when the quantity I >> have of an item drops below the re-order point. I also want this >> email NOT to go out if the transaction rolls back. (Wait, the order >> of 50000 widgets I just processed rolled back because it isn't to a >> valid customer! We normally only sell 50000 per year anyway. No need >> for the email.) > > Just use PGQ and be done with it. You have transactional and > asynchronous behavior. Typically, a trigger would produce events in the > queue, and a separate daemon will consume the queue and send emails. > That actually looks quite helpful. Thanks. Best Wishes, Chris Travers
On Thu, Aug 18, 2011 at 5:48 AM, Dimitri Fontaine <dimitri@2ndquadrant.fr> wrote: > c k <shreeseva.learning@gmail.com> writes: >> Many users are using it and found it stable and scalable. Important is that >> web server is external to the database and a mod_pgsql like mod_plsql is >> used to connect web server to database. Each page is considered as a stored >> procedure in the oracle database. I am not thinking of implementing as it is > > It's been around for a long time already: > > http://asmith.id.au/mod_libpq.html mod_libpq looks like it hasn't been updated in quite a while (apache 1.3 only) -- I think a node.js http server is superior in just about every way for this case. I 100% agree with the comments on the page though. merlin
Merlin Moncure writes: > On Thu, Aug 18, 2011 at 5:48 AM, Dimitri Fontaine > <dimitri@2ndquadrant.fr> wrote: >> c k <shreeseva.learning@gmail.com> writes: >>> Many users are using it and found it stable and scalable. Important is that >>> web server is external to the database and a mod_pgsql like mod_plsql is >>> used to connect web server to database. Each page is considered as a stored >>> procedure in the oracle database. I am not thinking of implementing as it is >> It's been around for a long time already: >> >> http://asmith.id.au/mod_libpq.html > mod_libpq looks like it hasn't been updated in quite a while (apache > 1.3 only) -- I think a node.js http server is superior in just about > every way for this case. I 100% agree with the comments on the page > though. > > merlin i still recommend nginx
Dmitriy Igrishin writes: > 2011/8/18 sad@bestmx.ru <sad@bestmx.ru> > > Merlin Moncure writes: > > On Thu, Aug 18, 2011 at 5:48 AM, Dimitri Fontaine <dimitri@2ndquadrant.fr> wrote: > > c k <shreeseva.learning@gmail.com> writes: > > Many users are using it and found it stable and scalable. Important is that web server is external to the database and a mod_pgsql like mod_plsql is used to connect web server to database. Each page is considered as a stored procedure in the oracle database. I am not thinking of implementing as it is > > It's been around for a long time already: > > http://asmith.id.au/mod_libpq.html > > mod_libpq looks like it hasn't been updated in quite a while (apache 1.3 only) -- I think a node.js http server is superior in just about every way for this case. I 100% agree with the comments on the page though. > > merlin > > i still recommend nginx > > I recommend Wt: > http://www.webtoolkit.eu/ > :-) it looks like feces "and uses well-tested patterns of desktop GUI development"
Dmitriy Igrishin writes: > 2011/8/18 sad@bestmx.ru <sad@bestmx.ru> > > Dmitriy Igrishin writes: > > 2011/8/18 sad@bestmx.ru <sad@bestmx.ru> > > > Merlin Moncure writes: > > On Thu, Aug 18, 2011 at 5:48 AM, Dimitri Fontaine <dimitri@2ndquadrant.fr> wrote: > > c k <shreeseva.learning@gmail.com> writes: > > Many users are using it and found it stable and scalable. Important is that web server is external to the database and a mod_pgsql like mod_plsql is used to connect web server to database. Each page is considered as a stored procedure in the oracle database. I am not thinking of implementing as it is > > It's been around for a long time already: > > http://asmith.id.au/mod_libpq.html > > mod_libpq looks like it hasn't been updated in quite a while (apache 1.3 only) -- I think a node.js http server is superior in just about every way for this case. I 100% agree with the comments on the page though. > > merlin > > i still recommend nginx > > I recommend Wt: > http://www.webtoolkit.eu/ > :-) > > it looks like feces > "and uses well-tested patterns of desktop GUI development" > > Oh oh. So unprofessional comment! > Well, have a nice coding a Web 2.0 application with nginx + PostgreSQL :-) > > -- > // Dmitriy. who said "web 2.0" ? i've never used religious idioms in a technical talk.
On Thu, Aug 18, 2011 at 4:32 AM, Sim Zacks <sim@compulab.co.il> wrote: > There are many differences. > 1) If I have a database function and I copy my database to another server, > the function still works. > If I have an external daemon application, I not only have to copy my > database, I also have to copy the daemon application. Then I have to build > an init script and make sure it runs at startup. My LISTEN/NOTIFY daemon is > a c application, so when I move my database to a server on a different > platform, I have to recompile it. Ok, so you have made a decision to favor performance well ahead of flexibility. I guess the question is what the performance cost of writing it in python actually is and what the flexibility cost of writing it in C actually is. Presumably you have already answered this for yourself, but this strikes me as coming out of that tradeoff rather than being inherent in the idea. > > 2) there is absolutely no reason you can't build redundancy into this > system. > > It's not a question of whether I can or cannot build redundancy, it is a > question of whether I have to build an entire system in order to call a > database function from another database function. The only reason this is > complicated is because it needs to be in its own session. That simple issue > shouldn't force me to build: a) a daemon application, b) include redundancy > to ensure that it is running, c) not be included in my database > backup/restore. Emailing IMHO isn't a database function. > Remember, I don't want to build a _system_, I basically want an asynchronous > trigger. On specific event call a database function in its own transaction > space and allow the existing transaction to end. > > 3) The overhead really shouldn't be bad, and if your parts are > well-modularized, and carefully designed overhead really should be > minimal. > > Any overhead that is not necessary should not be added in. It is the minor > level of frustration that something didn't work when I migrated servers > until the "Oh Yeah" kicked in. Then looking through all my notes to find the > compilation instructions for my daemon because we moved from a 32 bit server > to a 64 bit. Then trying to figure out the syntax for the init script, > because we moved from Gentoo to Debian and it is slightly different. It > isn't a lot of overhead but it is completely unnecessary in our situation. > I will agree that this is entirely necessary if your application actually > uses an external system and the database communicates through Listen/Notify. > You have 2 systems to deal with in any case, but for me the only external > component is having the daemon listen so it can call another function in the > database. IOW, I don't generally deal with anything else on the server. In general I would be opposed to allowing functions to exist outside of transactional control. While it is true you save some conceptual complexity in moving everything into the database, allowing stored proc functions to commit/start transactions would add a tremendous amount of conceptual complexity to the database itself. At the moment I don't think this is generally worth it. The beauty of the current approach is that the transactional control works in very well-defined ways. This significantly saves testing and QA effort. I would be concerned that a capability like this would be sufficiently disruptive to the assumptions of testing, that the costs would always be far higher than the benefits. Best Wishes, Chris Travers
On 19 August 2011 04:16, Merlin Moncure <mmoncure@gmail.com> wrote: >> It's been around for a long time already: >> >> http://asmith.id.au/mod_libpq.html > mod_libpq looks like it hasn't been updated in quite a while (apache > 1.3 only) -- I think a node.js http server is superior in just about > every way for this case. I 100% agree with the comments on the page > though. Tad late to chime in, but mod_libpq2.c is available from the good man, too. http://asmith.id.au/source/mod_libpq2.c Compiles & installs fine on Slackware64 13.1: sudo /usr/sbin/apxs -i -a -c -I /usr/include/postgresql/ -I /usr/include/httpd -lpq -o mod_libpq.so -n MOD_LIBPQ mod_libpq2.c using postgresql 9.0.4 & apache 2.2.19 Cheers, Andrej
[edited] Merlin Moncure <mmoncure@gmail.com> writes: >>>> http://asmith.id.au/mod_libpq.html >> http://asmith.id.au/source/mod_libpq2.c > > node.js is even thinner. > > node.js is single threaded and 100% asynchronous which fits very nice > with libpq which is at heart a single threaded asynchronous library. Sure. Elnode shares this design, and yaws more seriously so. I wouldn't pick mod_libpq myself. http://nic.ferrier.me.uk/blog/2010_10/elnode http://yaws.hyber.org/ Just saying that the thin web server layer that directly hands the request to the database has been existing in PostgreSQL land for a long time already, no need to resort to other proprietary architectures here. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support
On Sat, Aug 27, 2011 at 11:43 PM, Andrej <andrej.groups@gmail.com> wrote: > On 19 August 2011 04:16, Merlin Moncure <mmoncure@gmail.com> wrote: >>> It's been around for a long time already: >>> >>> http://asmith.id.au/mod_libpq.html >> mod_libpq looks like it hasn't been updated in quite a while (apache >> 1.3 only) -- I think a node.js http server is superior in just about >> every way for this case. I 100% agree with the comments on the page >> though. > > Tad late to chime in, but mod_libpq2.c is available from the good > man, too. > http://asmith.id.au/source/mod_libpq2.c > > Compiles & installs fine on Slackware64 13.1: > sudo /usr/sbin/apxs -i -a -c -I /usr/include/postgresql/ -I > /usr/include/httpd -lpq -o mod_libpq.so -n MOD_LIBPQ mod_libpq2.c > using postgresql 9.0.4 & apache 2.2.19 Sure -- but these days if I were trying to run a setup like this through a classic web server, I think a small fastcgi wrapper is a better way to go. It's simpler, gives you a cleaner approach to hooking in your own stuff like a connection pool, and is portable across web servers. ISTM lighttpd and nginx are lighter weight and designed for this type of serving. In apache, you have mod_fastcgi, which is probably a better abstraction to use than mod_libpq. You also have a much wider audience for your code if you choose to advance your wrapper as a library. node.js is even thinner. Instead of running your requests through fastcgi, node.js is a simple protocol server that runs the web request in the server thread itself, including your libpq bindings, connection pool, etc. This is why, as long as you are not doing too much work in the javascript itself, it could very well prove to be the most scalable solution with the lowest latency response times without resorting to a hand-rolled http server in C. node.js is single threaded and 100% asynchronous, which fits very nicely with libpq, which is at heart a single threaded asynchronous library. Client side connection pooling is trivial in single threaded engines, such that a middle of the road programmer could rig up an ad hoc implementation quickly and effectively; it is very complex otherwise. merlin