A huge portion of today’s Web consists of web pages filled with information from myriad online databases. This part of the Web, known as the deep Web, remains relatively unexplored to date, and even its major characteristics, such as the total number of searchable databases on the Web or their subject distribution, are still disputed.
In this paper, we revisit the problem of deep Web characterization: how can the total number of online databases on the Web be estimated? We propose the Host-IP clustering sampling method, which addresses the drawbacks of existing approaches to deep Web characterization, and report our findings based on a survey of the Russian Web.
The obtained estimates, together with the proposed sampling technique, could be useful for further studies of data in the deep Web.
The deep Web, the vast part of the Web consisting of web pages accessible only via web search forms (or search interfaces), is poorly crawled and thus largely invisible to current-day web search engines.
Though the problems with crawling dynamic web content hidden behind form-based search interfaces were recognized as early as 2000, the deep Web is still not adequately characterized, and its key parameters (e.g., the total number of deep web sites and web databases, the overall size of the deep Web, the coverage of the deep Web by conventional search engines, etc.) can only be guessed.
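To make the intuition behind Host-IP clustering sampling concrete, the following minimal Python sketch groups hostnames by their resolved IP addresses (so that virtual hosts sharing one server fall into the same cluster) and then draws a uniform random sample of the resulting clusters. This is only an illustration under that assumption, not the paper’s actual survey pipeline; the host list, sample size, and function names are hypothetical.

```python
# Illustrative sketch of Host-IP clustering sampling (assumption:
# hosts are grouped by resolved IP, then IP clusters are sampled).
import random
import socket
from collections import defaultdict

def cluster_hosts_by_ip(hostnames):
    """Resolve each hostname and group names that share an IP address."""
    clusters = defaultdict(set)
    for host in hostnames:
        try:
            ip = socket.gethostbyname(host)
        except socket.gaierror:
            continue  # skip hosts that no longer resolve
        clusters[ip].add(host)
    return clusters

def sample_ip_clusters(clusters, k, seed=0):
    """Draw a uniform random sample of k IP clusters."""
    rng = random.Random(seed)
    ips = sorted(clusters)
    chosen = rng.sample(ips, min(k, len(ips)))
    return {ip: clusters[ip] for ip in chosen}

if __name__ == "__main__":
    hosts = ["example.com", "example.org", "example.net"]  # placeholder host list
    for ip, names in sample_ip_clusters(cluster_hosts_by_ip(hosts), k=2).items():
        # In an actual survey, every host in a sampled cluster would be
        # crawled and checked for search interfaces and web databases.
        print(ip, sorted(names))
```

Sampling whole IP clusters rather than individual hostnames is what mitigates the virtual-hosting bias that affects hostname-based random sampling.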