That's a good idea, I'll have to play around and see what we can do.
In your opinion would a Linux server be able to handle this setup? Would
1000 connections/processes be a problem on Linux?
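For what it's worth, before committing to ~1000 backends I'd sanity-check the Linux-side limits myself, since each PostgreSQL connection is a separate process. A rough sketch of the checks I'd run (nothing authoritative, just the knobs that usually matter):

```shell
# Each Postgres backend is one OS process, so ~1000 connections means
# ~1000 processes plus the postmaster and background workers.
ulimit -u                      # per-user process limit for this account
cat /proc/sys/kernel/pid_max   # system-wide PID ceiling
cat /proc/sys/kernel/threads-max  # total tasks the kernel will allow
```

If those are comfortably above a few thousand (they usually are on a stock server kernel), the bigger constraint tends to be RAM per backend and Postgres's own max_connections setting, not the kernel.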
Thanks,
*Nathan Mascitelli*
Geotab Inc.
Software Developer | B. Eng Engineering Physics
[Direct] *+1 (289) 681-1005*
[Toll-Free] *+1 (877) 436-8221*
[Visit] www.geotab.com
On Wed, Apr 20, 2016 at 11:21 AM, John R Pierce <pierce@hogranch.com> wrote:
> On 4/20/2016 8:17 AM, Nathan Mascitelli wrote:
>
>> We are using a connection pooler. On average we see 2-5 connections per
>> database. But it sounds like we would need to either reclaim idle
>> connections more aggressively or lower the number of databases per
>> server, correct?
>>
>
>
> depending on the use case for these 300 different databases, perhaps they
> could be consolidated into 'schemas' within a smaller number of databases.
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
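John's schemas suggestion, sketched out for concreteness (all names here are invented for illustration; the real migration would also need to move the data and adjust application connection strings):

```shell
# Hypothetical sketch: generate the SQL to fold per-customer databases
# into schemas within a single consolidated database. With one database,
# pooled connections can be shared instead of 2-5 held per database.
for cust in customer_a customer_b customer_c; do
  echo "CREATE SCHEMA IF NOT EXISTS ${cust};"
  # Each customer's role sees only its own schema by default:
  echo "ALTER ROLE ${cust}_user SET search_path = ${cust};"
done > consolidate.sql
cat consolidate.sql
```

The payoff is that a pooler in front of one database can multiplex far fewer physical connections across all customers, instead of keeping a small idle pool per database.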