Re: List traffic
From | Yeb Havinga |
---|---|
Subject | Re: List traffic |
Date | |
Msg-id | 4BED83BD.90604@gmail.com |
In reply to | Re: List traffic ("Marc G. Fournier" <scrappy@hub.org>) |
List | pgsql-hackers |
Marc G. Fournier wrote:
> On Fri, 14 May 2010, Yeb Havinga wrote:
>
>> Marc G. Fournier wrote:
>>> On Thu, 13 May 2010, Alvaro Herrera wrote:
>>>
>>>> Excerpts from Yeb Havinga's message of jue may 13 15:06:53 -0400 2010:
>>>>
>>>>> My $0.02 - I like the whole 'don't sort, search' (or how did they call
>>>>> it?) just let the inbox fill up, google is fast enough. What would be
>>>>> really interesting is to have some extra 'tags/headers' added to the
>>>>> emails (document classification with e.g. self organizing map/kohonen),
>>>>> so my local filters could make labels based on that, instead of perhaps
>>>>> badly spelled keywords in subjects or message body.
>>>
>>> I missed this when I read it the first time .. all list email does
>>> have an X-Mailing-List header added so that you can label based on
>>> list itself ... is that what you mean, or are you thinking of
>>> something else entirely?
>>
>> Something else: if automatic classification of articles was in place,
>> there would be need of fewer mailing lists, depending on the quality
>> of the classification.
>
> You've mentioned this several times, but what *is* "autoclassification
> of articles"? or is this something you do on the gmail side of things?

I meant classification in the sense of automated, as opposed to manual classification by author or subscriber - in the general sense, not linked to any particular mail client or server. Example: junk mail detection by a mail client.

After sending my previous mail this morning I looked a bit more into (the FAQ of) Carrot2, which links to LingPipe as a solution for people who like pre-defined classes. LingPipe in fact has a tutorial where they classify a dataset of newsgroup articles, see e.g. http://alias-i.com/lingpipe/demos/tutorial/classify/read-me.html. I suppose it would be interesting to see what could be done with the pg archives.
If the archive database itself is publicly available, or could be made available, I'd be willing to put some time into this (solely on the basis that I'm interested in the outcome, not because I'm pushing for it to be used by the pg project - though that would of course be cool if it turned out that way in the end).

regards,
Yeb Havinga
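[Editor's note: to make the kind of automatic classification discussed above concrete, here is a minimal sketch of a multinomial naive Bayes text classifier using only the Python standard library. The training messages and labels are invented examples, not real pgsql-hackers data, and this is one simple technique among many - LingPipe and Carrot2 offer far more sophisticated approaches.]

```python
# A tiny multinomial naive Bayes message classifier, standard library only.
# Illustrative sketch: training data below is invented, not real archive data.
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Very naive tokenizer: lowercase, keep purely alphabetic words.
    return [w.lower() for w in text.split() if w.isalpha()]

class NaiveBayes:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word -> count
        self.label_counts = Counter()            # label -> message count
        self.vocab = set()

    def train(self, labeled_messages):
        for text, label in labeled_messages:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def classify(self, text):
        total = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + log likelihoods with add-one (Laplace) smoothing
            score = math.log(self.label_counts[label] / total)
            n = sum(self.word_counts[label].values())
            for word in tokenize(text):
                score += math.log(
                    (self.word_counts[label][word] + 1) / (n + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented example messages with hand-assigned labels.
messages = [
    ("planner chooses seq scan instead of index scan", "performance"),
    ("query slow after vacuum analyze", "performance"),
    ("patch adds new syntax to create table", "hackers"),
    ("review of the proposed parser patch", "hackers"),
]
nb = NaiveBayes()
nb.train(messages)
print(nb.classify("index scan slow query"))   # -> performance
```

A classifier like this could emit its prediction as an extra mail header (the tags/headers idea quoted above), which local filters could then match on instead of keywords in the subject or body.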