Google wants to rank internet pages based on the quality of the facts they contain. A new paper published by a group of software engineers suggests that the internet search giant may be preparing to change the algorithms it uses to scour the web. Currently, search results are ranked by a complex combination of keywords and links to other websites, but this fails to weed out inaccurate information. Instead, a Google research team has developed a way of measuring the trustworthiness of the information contained on an internet page.
If implemented, it could mean that currently popular sources that regularly get facts wrong could fall foul of the new search technique.
In a paper to be published in the Proceedings of the Very Large Database Endowment, the Google researchers said webpages would be allocated trustworthiness scores. They said: ‘Quality assessment for web sources is of tremendous importance in web search. It has been traditionally evaluated using exogenous signals such as hyperlinks and browsing history. Conversely, some less popular websites nevertheless have very accurate information. We address the fundamental question of estimating how trustworthy a given web source is.’
Currently, web searches are ranked by, among other things, the number of incoming links to a page, which helps Google’s search bots gauge the page’s quality. This, however, is really a measure of a webpage’s popularity rather than the accuracy of the information it contains. Webpages containing inaccurate information can be widely shared and linked to by blogs and other external sites, causing them to feature high up in Google search results. However, Xin Luna Dong, Wei Zhang and colleagues at Google have designed a way of automatically extracting information from webpages and ranking them for trustworthiness. They use a system they call Knowledge-Based Trust, which pulls facts from many pages and then jointly estimates the correctness of those facts. It then counts the number of incorrect facts on a page to give it a trust score. The software the team has developed draws on Google’s vast Knowledge Vault – a store of facts that have been pulled off the internet and are unanimously agreed on as being true.
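The scoring idea described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not the paper’s method: the actual Knowledge-Based Trust system uses a joint probabilistic model over extractors, facts and sources, whereas the sketch below just checks a page’s extracted facts against a small stand-in for the Knowledge Vault and returns the fraction that agree. All names, facts and the scoring rule here are illustrative assumptions.

```python
# Hypothetical stand-in for Google's Knowledge Vault:
# maps (subject, predicate) to the agreed-upon object value.
KNOWLEDGE_VAULT = {
    ("Eiffel Tower", "city"): "Paris",
    ("Mount Everest", "height_m"): "8848",
    ("Barack Obama", "birthplace"): "Hawaii",
}

def trust_score(page_facts):
    """Score a page by the fraction of its extracted (subject,
    predicate, object) triples that agree with the reference store.
    Facts the store knows nothing about are ignored; a page with no
    checkable facts gets no score (None)."""
    checked = 0
    correct = 0
    for subject, predicate, obj in page_facts:
        reference = KNOWLEDGE_VAULT.get((subject, predicate))
        if reference is None:
            continue  # no evidence either way for this fact
        checked += 1
        if obj == reference:
            correct += 1
    return correct / checked if checked else None

# A page whose checkable facts all match the store...
accurate_page = [
    ("Eiffel Tower", "city", "Paris"),
    ("Mount Everest", "height_m", "8848"),
]
# ...versus a widely linked page carrying one incorrect fact.
gossip_page = [
    ("Barack Obama", "birthplace", "Kenya"),  # contradicts the store
    ("Eiffel Tower", "city", "Paris"),
]

print(trust_score(accurate_page))  # 1.0
print(trust_score(gossip_page))    # 0.5
```

The key contrast with link counting falls out directly: the gossip page could have far more inbound links than the accurate one, but its score depends only on how its stated facts compare with the reference store.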