Thread: Web Bots???
  #2  
09-22-2008, 10:28 AM
redrat11
Senior Member
 
Join Date: Sep 2005
Posts: 2,227
Re: Web Bots???



Note: I'm not really a web bot??? Or am I?


___________________________
Urban Survival Weekly Report: How we're Replaying 1929


Web Bot Technology

In June 2001 I began to correspond with a reader of my website who said he was willing to share access to a promising new web technology, on the condition that I protect his identity. The person related that he had been a very senior programmer with a software company in the Pacific Northwest (you can guess which company, right?) and that, besides being a SQL ace, he was also heavily into linguistics and a language called Prolog, which is closer to an artificial intelligence language than anything else.

I was skeptical, to be sure, but a few days after we began the email exchange of ideas, he sent me a program he had written that allows a computer to be turned into a speed-reading tool. It was based on rapidly displaying individual words on a computer screen. He said this was a technology that he had developed and sold for a while on the Internet. He also explained how the development rights to the technology had been sold to a company (www.ebrainspeed.com). After looking up the patent he held for the technology, I was convinced that this fellow was for real and might be on to something: looking for linguistic shifts on the Internet as a tool for forecasting future events.
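For the curious, that word-flashing trick (rapid serial visual presentation) is simple enough to sketch. Here is a rough Python illustration of the idea, my own toy version rather than his patented program; the reading speed and sample sentence are made up:

import time

def rsvp(text, wpm=300):
    # Rapid serial visual presentation: flash one word at a time
    # at a fixed spot so the eye never has to move.
    delay = 60.0 / wpm  # seconds per word
    for word in text.split():
        # \r returns to the start of the line; centering pads the
        # word so shorter words fully overwrite longer ones.
        print("\r" + word.center(20), end="", flush=True)
        time.sleep(delay)
    print()

rsvp("The quick brown fox jumps over the lazy dog", wpm=300)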

He described how the technology worked. A system of spiders, agents, and wanderers travels the Internet, much like a search-engine robot, looking for particular kinds of words. It targets discussion groups, translation sites, and places where regular people post a lot of text.
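As I understand it, the crawling side is conceptually no different from any search-engine spider: fetch a page, scan it for target words, harvest links, repeat. Here is a bare-bones Python sketch of a single step of that loop; the seed URL and target lexicon are placeholders, and the real system was obviously far more elaborate:

import re
from urllib.request import urlopen

TARGET_WORDS = {"tension", "release", "collapse"}  # placeholder lexicon

def scan_page(url):
    # Fetch one page, strip markup crudely, and check for target words,
    # collecting outbound links for the spider to visit next.
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    text = re.sub(r"<[^>]+>", " ", html)
    hits = [w for w in TARGET_WORDS if re.search(r"\b%s\b" % w, text, re.I)]
    links = re.findall(r'href="(https?://[^"]+)"', html)
    return hits, links

hits, links = scan_page("http://example.com/")  # placeholder seed URL
print(hits, len(links))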

When a "target word," or something lexically similar, was found, the web bots took a small 2,048-byte snip of the surrounding text and sent it to a central collection point. The collected data at times approached 100 GB in sample size, and we could have used terabytes. The data was then filtered through at least seven layers of linguistic processing in Prolog, reduced to numbers, and finally rendered as scatter-chart plots on multiple layers of IntelliCAD (http://www.cadinfo.net/icad/icadhis.htm). Viewed over a period of time, the scatter-chart points tended to coalesce into highly concentrated areas. Each dot on the chart might represent one word or several hundred.
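The 2,048-byte snip is easy to picture in code. Here is my guess at the mechanics in Python; the window size is the one he quoted, but the rest is a stand-in, and nothing below pretends to reproduce his seven layers of Prolog processing:

def snip(text, word, width=2048):
    # Take roughly `width` bytes of context centered on the first
    # occurrence of `word`, mimicking the described collection step.
    i = text.lower().find(word.lower())
    if i < 0:
        return None
    start = max(0, i - width // 2)
    return text[start:start + width]

sample = "... long page text mentioning a collapse in markets ..."
print(snip(sample, "collapse", width=2048))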

To define meanings, words or groups of words have to be reduced to their essence. You know how lowest common denominators work in fractions, right? Well, the process is like looking for the lowest common denominator among a group of words.
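A crude analogue of that reduction is stemming: collapsing related word forms onto one shared root so they count as the same underlying concept. A toy suffix-stripping sketch in Python (a real system would use proper morphological analysis; this only shows the idea):

def stem(word):
    # Strip common English suffixes so related forms collapse
    # to one shared root, the "common denominator" of the group.
    for suffix in ("ations", "ation", "ings", "ing", "ers", "er",
                   "ed", "es", "s", "e"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

words = ["collapsing", "collapsed", "collapses", "collapse"]
print({w: stem(w) for w in words})  # every form reduces to "collaps"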



Mirror Of Aphrodite: Web Bot And 2012 - Answers


