Friday, March 24, 2006

Custom user login screen in FC4

1. Edit /etc/sysconfig/desktop

2. Add the line DISPLAYMANAGER="KDE"

3. Edit /etc/kde/kdm/kdmrc
Change the line 'UseTheme=true' to 'UseTheme=false'

4. Search for the line 'AllowRootLogin=false'
Change 'false' to 'true'
Note: this line appears twice in the file, so change both occurrences.

5. Reboot and login to KDE as root

6. Click K-Menu -> Control Center. Click System Administration in the Control Center window, then select Login Manager.

7. Here you are! Change the appearance, font and background as you wish. Log out and see your custom login screen. Enjoy ;)
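The edits in steps 3 and 4 are simple key rewrites, so they can also be scripted. The sketch below is a hypothetical Python helper (not part of FC4 or KDE); it operates on kdmrc-style text and replaces every occurrence of a key, which matters because AllowRootLogin appears in more than one section. Back up /etc/kde/kdm/kdmrc before applying anything like this to the real file.

```python
# Hypothetical helper for steps 3 and 4: rewrite every 'key=value'
# line in kdmrc-style text. The section names below are illustrative.

def set_key(text, key, value):
    """Replace the value of every 'key=...' line."""
    out = []
    for line in text.splitlines():
        if line.split("=", 1)[0].strip() == key:
            out.append(f"{key}={value}")
        else:
            out.append(line)
    return "\n".join(out)

sample = """[X-*-Greeter]
UseTheme=true
[X-:*-Core]
AllowRootLogin=false
[X-*-Core]
AllowRootLogin=false"""

sample = set_key(sample, "UseTheme", "false")       # step 3
sample = set_key(sample, "AllowRootLogin", "true")  # step 4, both lines
print(sample)
```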

Thursday, March 23, 2006

How Internet Search Engines Work

Internet search engines are special sites on the Web that are designed to help people find information stored on other sites. There are differences in the ways various search engines work, but they all perform three basic tasks:

  • They search the Internet -- or select pieces of the Internet -- based on important words.
  • They keep an index of the words they find, and where they find them.
  • They allow users to look for words or combinations of words found in that index.

Early search engines held an index of a few hundred thousand pages and documents, and received maybe one or two thousand inquiries each day. Today, a top search engine will index hundreds of millions of pages, and respond to tens of millions of queries per day. In this article, we'll tell you how these major tasks are performed, and how Internet search engines put the pieces together in order to let you find the information you need on the Web.

Looking at the Web

Searches Per Day:
Top 5 Engines
  • Google - 250 million
  • Overture - 167 million
  • Inktomi - 80 million
  • LookSmart - 45 million
  • FindWhat - 33 million
*Source: SearchEngineWatch.com, Feb. 2003
When most people talk about Internet search engines, they really mean World Wide Web search engines. Before the Web became the most visible part of the Internet, there were already search engines in place to help people find information on the Net. Programs with names like "gopher" and "Archie" kept indexes of files stored on servers connected to the Internet, and dramatically reduced the amount of time required to find programs and documents. In the late 1980s, getting serious value from the Internet meant knowing how to use gopher, Archie, Veronica and the rest.

Today, most Internet users limit their searches to the Web, so we'll limit this article to search engines that focus on the contents of Web pages.

An Itsy-Bitsy Beginning
Before a search engine can tell you where a file or document is, it must be found. To find information on the hundreds of millions of Web pages that exist, a search engine employs special software robots, called spiders, to build lists of the words found on Web sites. When a spider is building its lists, the process is called Web crawling. (There are some disadvantages to calling part of the Internet the World Wide Web -- a large set of arachnid-centric names for tools is one of them.) In order to build and maintain a useful list of words, a search engine's spiders have to look at a lot of pages.

How does any spider start its travels over the Web? The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.


"Spiders" take a Web page's content and create key search words that enable online users to find pages they're looking for.

Google.com began as an academic search engine. In the paper that describes how the system was built, Sergey Brin and Lawrence Page give an example of how quickly their spiders can work. They built their initial system to use multiple spiders, usually three at one time. Each spider could keep about 300 connections to Web pages open at a time. At its peak performance, using four spiders, their system could crawl over 100 pages per second, generating around 600 kilobytes of data each second.

Keeping everything running quickly meant building a system to feed necessary information to the spiders. The early Google system had a server dedicated to providing URLs to the spiders. Rather than depending on an Internet service provider for the domain name server (DNS) that translates a server's name into an address, Google had its own DNS, in order to keep delays to a minimum.

When the Google spider looked at an HTML page, it took note of two things:

  • The words within the page
  • Where the words were found

Words occurring in the title, subtitles, meta tags and other positions of relative importance were noted for special consideration during a subsequent user search. The Google spider was built to index every significant word on a page, leaving out the articles "a," "an" and "the." Other spiders take different approaches.

These different approaches usually attempt to make the spider operate faster, allow users to search more efficiently, or both. For example, some spiders will keep track of the words in the title, sub-headings and links, along with the 100 most frequently used words on the page and each word in the first 20 lines of text. Lycos is said to use this approach to spidering the Web.
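As a toy illustration of the indexing pass just described, the sketch below records each significant word and the positions where it occurs, skipping the articles "a," "an" and "the" the way the early Google spider did. It is a deliberately simplified model, not any engine's actual code.

```python
# Build a word -> positions index for one page, skipping articles.

STOP_WORDS = {"a", "an", "the"}

def index_page(text):
    index = {}
    for position, word in enumerate(text.lower().split()):
        word = word.strip(".,;:!?")
        if word in STOP_WORDS:
            continue
        index.setdefault(word, []).append(position)
    return index

index = index_page("The spider indexes the words on a page")
print(index["spider"])  # → [1]: the word, and where it was found
```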

Other systems, such as AltaVista, go in the other direction, indexing every single word on a page, including "a," "an," "the" and other "insignificant" words. The push to completeness in this approach is matched by other systems in the attention given to the unseen portion of the Web page, the meta tags.

Meta Tags
Meta tags allow the owner of a page to specify key words and concepts under which the page will be indexed. This can be helpful, especially in cases in which the words on the page might have double or triple meanings -- the meta tags can guide the search engine in choosing which of the several possible meanings for these words is correct. There is, however, a danger in over-reliance on meta tags, because a careless or unscrupulous page owner might add meta tags that fit very popular topics but have nothing to do with the actual contents of the page. To protect against this, spiders will correlate meta tags with page content, rejecting the meta tags that don't match the words on the page.

All of this assumes that the owner of a page actually wants it to be included in the results of a search engine's activities. Many times, the page's owner doesn't want it showing up on a major search engine, or doesn't want the activity of a spider accessing the page. Consider, for example, a game that builds new, active pages each time sections of the page are displayed or new links are followed. If a Web spider accesses one of these pages, and begins following all of the links for new pages, the game could mistake the activity for a high-speed human player and spin out of control. To avoid situations like this, the robot exclusion protocol was developed. This protocol, implemented in the meta-tag section at the beginning of a Web page, tells a spider to leave the page alone -- to neither index the words on the page nor try to follow its links.
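The page-level form of the exclusion protocol is a META tag such as `<meta name="robots" content="noindex, nofollow">`. A minimal check a spider might perform before indexing a page, using only Python's standard library, could look like this (a sketch, not any crawler's real code):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collect the directives from any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            content = attrs.get("content") or ""
            self.directives |= {d.strip().lower() for d in content.split(",")}

def may_index(html):
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" not in parser.directives

page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print(may_index(page))  # → False: the spider should leave this page alone
```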

Building the Index

Once the spiders have completed the task of finding information on Web pages (and we should note that this is a task that is never actually completed -- the constantly changing nature of the Web means that the spiders are always crawling), the search engine must store the information in a way that makes it useful. There are two key components involved in making the gathered data accessible to users:
  • The information stored with the data
  • The method by which the information is indexed

In the simplest case, a search engine could just store the word and the URL where it was found. In reality, this would make for an engine of limited use, since there would be no way of telling whether the word was used in an important or a trivial way on the page, whether it was used once or many times, or whether the page contained links to other pages containing the word. In other words, there would be no way of building the ranking list that tries to present the most useful pages at the top of the list of search results.

To make for more useful results, most search engines store more than just the word and URL. An engine might store the number of times that the word appears on a page. The engine might assign a weight to each entry, with increasing values assigned to words as they appear near the top of the document, in sub-headings, in links, in the meta tags or in the title of the page. Each commercial search engine has a different formula for assigning weight to the words in its index. This is one of the reasons that a search for the same word on different search engines will produce different lists, with the pages presented in different orders.

Regardless of the precise combination of additional pieces of information stored by a search engine, the data will be encoded to save storage space. For example, the original Google paper describes using 2 bytes, of 8 bits each, to store information on weighting -- whether the word was capitalized, its font size, position, and other information to help in ranking the hit. Each factor might take up 2 or 3 bits within the 2-byte grouping (8 bits = 1 byte). As a result, a great deal of information can be stored in a very compact form. After the information is compacted, it's ready for indexing.
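As an illustration of that kind of bit-packing, here is one plausible 16-bit layout, loosely modeled on the hit format sketched in the Brin and Page paper: 1 bit for capitalization, 3 bits for relative font size, and 12 bits for word position. The exact field widths are an assumption for this example.

```python
# Pack a "hit" (capitalization, font size, word position) into 2 bytes.

def pack_hit(capitalized, font_size, position):
    assert 0 <= font_size < 8 and 0 <= position < 4096
    return (int(capitalized) << 15) | (font_size << 12) | position

def unpack_hit(hit):
    return bool(hit >> 15), (hit >> 12) & 0x7, hit & 0xFFF

hit = pack_hit(True, 5, 300)
print(hit.bit_length() <= 16)  # → True: the whole hit fits in 2 bytes
print(unpack_hit(hit))         # → (True, 5, 300)
```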

An index has a single purpose: It allows information to be found as quickly as possible. There are quite a few ways for an index to be built, but one of the most effective ways is to build a hash table. In hashing, a formula is applied to attach a numerical value to each word. The formula is designed to evenly distribute the entries across a predetermined number of divisions. This numerical distribution is different from the distribution of words across the alphabet, and that is the key to a hash table's effectiveness.

In English, there are some letters that begin many words, while others begin fewer. You'll find, for example, that the "M" section of the dictionary is much thicker than the "X" section. This inequity means that finding a word beginning with a very "popular" letter could take much longer than finding a word that begins with a less popular one. Hashing evens out the difference, and reduces the average time it takes to find an entry. It also separates the index from the actual entry. The hash table contains the hashed number along with a pointer to the actual data, which can be sorted in whichever way allows it to be stored most efficiently. The combination of efficient indexing and effective storage makes it possible to get results quickly, even when the user creates a complicated search.
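A toy version of such an index makes the separation concrete: each word is hashed into one of a fixed number of buckets, and the bucket stores a pointer (here, simply a list of URLs) to the actual data. The hash function below is an arbitrary illustration, not what any engine actually uses.

```python
NUM_BUCKETS = 8  # real indexes use far more divisions

def bucket_for(word):
    # A simple multiplicative hash that spreads words across buckets
    # regardless of which letter they begin with.
    h = 0
    for ch in word:
        h = (h * 31 + ord(ch)) % NUM_BUCKETS
    return h

index = [{} for _ in range(NUM_BUCKETS)]

def add_entry(word, url):
    index[bucket_for(word)].setdefault(word, []).append(url)

def lookup(word):
    return index[bucket_for(word)].get(word, [])

add_entry("spider", "http://example.com/a")
add_entry("spider", "http://example.com/b")
print(lookup("spider"))  # → ['http://example.com/a', 'http://example.com/b']
```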

Building a Search

Searching through an index involves a user building a query and submitting it through the search engine. The query can be quite simple, a single word at minimum. Building a more complex query requires the use of Boolean operators that allow you to refine and extend the terms of the search.

The Boolean operators most often seen are:

  • AND - All the terms joined by "AND" must appear in the pages or documents. Some search engines substitute the operator "+" for the word AND.
  • OR - At least one of the terms joined by "OR" must appear in the pages or documents.
  • NOT - The term or terms following "NOT" must not appear in the pages or documents. Some search engines substitute the operator "-" for the word NOT.
  • FOLLOWED BY - One of the terms must be directly followed by the other.
  • NEAR - One of the terms must be within a specified number of words of the other.
  • Quotation Marks - The words between the quotation marks are treated as a phrase, and that phrase must be found within the document or file.
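Treating each document as a set of words is enough to model the first three operators. The sketch below is a deliberately minimal evaluator (FOLLOWED BY and NEAR would additionally require word positions, which this model discards):

```python
def matches(doc, required=(), excluded=(), any_of=()):
    """AND terms in `required`, NOT terms in `excluded`, OR terms in `any_of`."""
    words = set(doc.lower().split())
    if any(term in words for term in excluded):               # NOT
        return False
    if not all(term in words for term in required):           # AND
        return False
    if any_of and not any(term in words for term in any_of):  # OR
        return False
    return True

doc = "flowers planted in a raised bed near the fence"
print(matches(doc, required=["bed", "flowers"]))           # → True  (AND)
print(matches(doc, required=["bed"], excluded=["truck"]))  # → True  (AND NOT)
print(matches(doc, any_of=["fish", "flowers"]))            # → True  (OR)
```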

Future Search

The searches defined by Boolean operators are literal searches -- the engine looks for the words or phrases exactly as they are entered. This can be a problem when the entered words have multiple meanings. "Bed," for example, can be a place to sleep, a place where flowers are planted, the storage space of a truck or a place where fish lay their eggs. If you're interested in only one of these meanings, you might not want to see pages featuring all of the others. You can build a literal search that tries to eliminate unwanted meanings, but it's nice if the search engine itself can help out.

One of the areas of search engine research is concept-based searching. Some of this research involves using statistical analysis on pages containing the words or phrases you search for, in order to find other pages you might be interested in. Obviously, the information stored about each page is greater for a concept-based search engine, and far more processing is required for each search. Still, many groups are working to improve both results and performance of this type of search engine. Others have moved on to another area of research, called natural-language queries.

The idea behind natural-language queries is that you can type a question in the same way you would ask it to a human sitting beside you -- no need to keep track of Boolean operators or complex query structures. The most popular natural-language query site today is AskJeeves.com, which parses the query for keywords that it then applies to the index of sites it has built. It only works with simple queries, but competition is heavy to develop a natural-language query engine that can accept queries of great complexity.

Wednesday, March 15, 2006

What are FrontPage Server Extensions?

The FrontPage Server Extensions are a set of programs on the Web server that support:

  • Authoring FrontPage webs. For example, when an author moves a page from one folder to another in a FrontPage web, the Server Extensions automatically update all hyperlinks to that page from every other page and Microsoft Office document in the FrontPage web, directly on the Web server machine.
  • Administering FrontPage webs. For example, a FrontPage web administrator can specify which users can administer, author or browse a FrontPage web.
  • Browse-time FrontPage web functionality. For example, users of a FrontPage web can participate in a discussion group. The Server Extensions will maintain an index of hyperlinks to articles in the discussion, separate discussion threads, tables of contents, and search forms to locate pages of interest.

A FrontPage web is a project containing all the pages, images, and other files that make up a Web site. For a full description of FrontPage webs, see FrontPage Webs.

The design of the FrontPage client and Server Extensions minimizes the need for costly file transfers over the Internet. When an author using the FrontPage Explorer opens a FrontPage web from a Web server containing the Server Extensions, information about the FrontPage web, such as its hyperlink map, is downloaded to the client machine so that the FrontPage Explorer can display the information. However, the full set of pages and other files that comprise the FrontPage web remain on the Web server machine. A page is only downloaded over the Internet when it is opened for editing in the FrontPage Editor. This is a very efficient mechanism: an entire Web site can be changed directly on a Web server at the cost of downloading and editing a single file.

When a Web server machine has the FrontPage Server Extensions, FrontPage web authoring and administering functionality is available from a PC or Macintosh computer that has the FrontPage client program and that is on the Internet or on a local intranet. The browse-time functionality of the Server Extensions is available from any Web browser on the Internet or an intranet.

Communication between a client computer and a Web server containing the Server Extensions uses the same open, ubiquitous HTTP protocol that Web browsers on a client computer use to interact with a Web server. No file-sharing access on the Web server machine is needed, nor is FTP or telnet access required. No proprietary file-system sharing calls are necessary.

The Server Extensions are designed to work with any standard Web server using the Common Gateway Interface (CGI), the near-universal Web server extension mechanism. This includes freeware and shareware servers such as those from Apache, CERN and NCSA, and commercial web servers from Netscape, Microsoft, and O’Reilly and Associates. The Server Extensions are designed to be easily ported to all popular hardware and software platforms for cross-platform Web server compatibility. See FrontPage Server Extensions: Supported Platforms for a complete list of the operating systems and Web servers for which the Server Extensions are available.

On Windows Web servers, the Server Extensions are integrated with Microsoft Visual SourceSafe and support version control and check-ins and check-outs of files from the Web server.

The Server Extensions are also used by Microsoft Visual InterDev in the same way that they are used by Microsoft FrontPage.

FrontPage Webs

FrontPage works with World Wide Web content by managing FrontPage webs. You can think of a FrontPage web as a project. It contains all the pages, images, and other files that make up a Web site. Authors can create, delete, open, and close FrontPage webs using the FrontPage Explorer and FrontPage Editor on a client computer. A FrontPage web can be stored on a remote Web server computer, a Web server running on the same computer as the client program, or in the client computer's file system.

Many of the features of a FrontPage web require the FrontPage Server Extensions to be on the server containing the FrontPage web. Some of the features of FrontPage webs that are supported by the FrontPage Server Extensions are:

  • A full hyperlink map of all files in a FrontPage web. The FrontPage Explorer displays hyperlinks using this hyperlink map. When a FrontPage web is copied from one Web server to another, the entire hyperlink map is recalculated.
  • A full-text index of all Web pages in a FrontPage web. This lets end-users search a FrontPage web for pages containing words or phrases.
  • A persistent structure that authors can create and manipulate. This structure defines the key pages in a FrontPage web and the relationships among these pages. Authors operate on the structure of a FrontPage web in the FrontPage Explorer. When the structure of a FrontPage web is changed, affected pages are updated to reflect the changes.
  • Web themes. A theme is a set of color-coordinated page elements, including background colors, text colors, bullets, borders, and horizontal lines. By applying a theme to a FrontPage web, an author can easily give a FrontPage web a consistent, attractive appearance. When a new theme is applied to a FrontPage web, all pages are automatically updated to use it.
  • A Tasks list containing the tasks needed to complete a FrontPage web. Tasks are linked to the pages on which they occur.
  • Unique security settings. Each FrontPage web can be made available to a different group of administrators, authors and end-users.

FrontPage supports two kinds of FrontPage webs: root webs and sub-webs. A root web is a FrontPage web that is the top-level content directory of a Web server or, in a multihosting environment, of a virtual Web server. It can have many levels of subdirectories containing its content. There can be only one root web per Web server or virtual Web server.

A single root web can support a number of sub-webs. A sub-web is a complete FrontPage web that is a subdirectory of the root web. Sub-webs can only exist one level below the root web. Each sub-web can have many levels of subdirectories, making up its content. Sub-webs cannot have sub-webs.

Even though sub-webs appear below the root web in the Web server's file system and URL space, the root web does not include the content in its sub-webs. This separation of content is done by the FrontPage Server Extensions.
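The content separation can be sketched as a routing rule: a path whose first directory component names a sub-web belongs to that sub-web, and everything else belongs to the root web. The paths and sub-web name below are illustrative, not tied to any real server.

```python
def owning_web(path, sub_webs):
    """Decide which FrontPage web a server-relative content path belongs to."""
    parts = [p for p in path.split("/") if p]
    if parts and parts[0] in sub_webs:
        return parts[0]        # sub-webs live one level below the root web
    return "<root web>"        # the root web excludes sub-web content

sub_webs = {"mycompany"}
print(owning_web("/mycompany/products/index.htm", sub_webs))  # → mycompany
print(owning_web("/about/index.htm", sub_webs))               # → <root web>
```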

The root web and all sub-webs on a server have separate copies of the Server Extensions installed, or have stub executables of the Server Extensions programs. Having separate Server Extensions copies for each FrontPage web lets the Web server enforce different end-user, author, and administrator permissions on each FrontPage web, since FrontPage uses the Web server's built-in security mechanism to control access.

FrontPage webs can be implemented on a Web server and accessed by Web browsers in the following ways:

  • As private domain names, such as "www.mycompany.com." These are usually implemented as virtual servers on the same physical server machine using multihosting. Private domain name customers each get their own root web and have the option of creating sub-webs.
  • As a common or shared domain but with private virtual servers, as in "www.mycompany.myprovider.com," where "myprovider.com" is a shared domain and "www.mycompany" is a private virtual server. Private virtual server customers on a shared domain each get their own root web and have the option of creating sub-webs.
  • As a URL on an Internet service provider's server machine, as in "www.myprovider.com/mycompany." URL customers get a single sub-web.

FrontPage Authoring Support

In FrontPage, authors create Web pages and entire Web sites using FrontPage on a client computer (a PC or Macintosh). The FrontPage client programs are the FrontPage Explorer and the FrontPage Editor.

  • The FrontPage Explorer is the FrontPage tool for creating, designing, viewing, maintaining, and publishing FrontPage webs. Various views in the FrontPage Explorer provide different ways of looking at and modifying the contents of a FrontPage web.

  • The FrontPage Editor is the FrontPage tool for creating, editing, and testing World Wide Web pages. As an author adds text, tables, forms, images, controls and other elements to a page, the FrontPage Editor displays it in WYSIWYG style, as it would appear in a Web browser. The FrontPage Editor is fully integrated with the FrontPage Explorer.

Much of the FrontPage Explorer and Editor's functionality is supported by the FrontPage Server Extensions. Some examples are:

  • Saving documents to FrontPage webs.
  • Creating, copying, and publishing FrontPage webs.
  • The Tasks view, containing a list of tasks needed to complete a FrontPage web.
  • FrontPage web-structure editing. An author defines the structure of a FrontPage web in the FrontPage Explorer's Navigation View and inserts navigation bars in the FrontPage Editor. FrontPage navigation bars automatically create the hyperlinks that express the FrontPage web's structure. If the author changes the structure in the Navigation View, all FrontPage navigation bars automatically update these hyperlinks.
  • FrontPage components (also called WebBot components). FrontPage includes a rich set of active components that update pages when a change occurs in the FrontPage web. For example, the Table of Contents component keeps an updated table of contents of the entire FrontPage web. When an author moves a page, the table of contents is updated. Another FrontPage component, the Include component, inserts the contents of one page into another. If the inserted page changes, all pages that include it are automatically updated.
  • Hyperlink map. A FrontPage web's hyperlink map is browsable in the FrontPage Explorer's Hyperlink View. Using this map, FrontPage updates affected hyperlinks in the FrontPage web when a page is moved or renamed.
  • Themes. Authors select FrontPage web themes in the FrontPage Explorer, or they can apply a theme to a single page in the FrontPage Editor. When the theme for a FrontPage web changes, FrontPage automatically updates every page in the FrontPage web to use the new theme.

FrontPage Administrative Support

The FrontPage Server Extensions provide a set of web-administration tools that can be used remotely from the FrontPage Explorer. These tools provide access control and general Web-administration functionality. The Server Extensions support three levels of access control of FrontPage webs: administrator, author, and browser.

  • Administering permission gives a user, group of users, or a computer permission to administer the FrontPage web.
  • Authoring permission gives a user, group of users, or a computer permission to open the FrontPage web in the FrontPage Explorer and edit its pages and files.
  • Browsing permission gives a user, group of users, or a computer permission to browse the FrontPage web when it is published on the Internet or on an intranet.

For a full discussion of FrontPage Server Extension administrative capabilities, along with a general discussion of Server Extensions security issues, see The FrontPage Server Extensions: Security Considerations.

FrontPage Browse-time Support

FrontPage browse-time support occurs when a user views a page in a FrontPage web from a Web browser. Browse-time support is implemented in the FrontPage Server Extensions as FrontPage components (also called WebBot components).

A FrontPage component is an active object that is inserted on an HTML page using the FrontPage Editor. It has a persistent state that is encoded as HTML comments. FrontPage components typically produce as their output HTML that is inserted in the surrounding HTML page. FrontPage components can be active at authoring-time, while the FrontPage Editor and Explorer are in use, or at browse-time. For example, the Include component is an authoring-time component that includes the contents of one page in another.

Some browse-time FrontPage components are:

Search Form
The Search Form uses the full text index created by the Server Extensions. It appears as a form on a page. When a user submits a Search Form containing words to locate, the Search Form returns hyperlinks to all pages in a FrontPage web that contain the words.
E-mail Form Handler
The E-mail Form Handler gathers information from a form, formats the information, and sends it to an e-mail address.
Discussion Form Handler
The Discussion Form Handler lets users participate in an online discussion. It collects information from a form, formats it into an HTML page, and adds the page to a table of contents and to a text index.

When a user browses an HTML page containing a browse-time FrontPage component, the Server Extensions do whatever processing is required and then generate a new HTML page to display the results of the operation. For example, a Search Form will generate an HTML list of hyperlinks to pages and an E-mail Form Handler will generate a page confirming that a form's contents have been processed and sent to an e-mail address.

An HTML page with no browse-time FrontPage components does not use the Server Extensions when a user browses the page. Instead, the normal Web server page-retrieval process occurs.

Publishing FrontPage Webs

Publishing a FrontPage web means making the FrontPage web available to users for browsing from a Web server, either on an intranet or on the Internet. The Internet and intranet cases typically use different methods of publishing.

In Internet publishing, the most common method is for an author to create a FrontPage web on a Web server installed on the client computer. (When the FrontPage client program is installed, FrontPage optionally installs a Web server with the Server Extensions on the client computer.) Then, when the FrontPage web is completed and tested, the author publishes it to an Internet service provider's Web server using the FrontPage Explorer's Publish FrontPage Web command. Authoring on a local Web server is efficient because it does not require an author to be connected to an Internet service provider while working on a FrontPage web.

The Publish FrontPage Web command copies the FrontPage web from a source (desktop) Web server to a destination (production) Web server in batch mode. Only new or changed pages and files are copied by default. Pages and files deleted from the source FrontPage web are also deleted from the destination web.
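The batch-mode behavior just described can be modeled in a few lines. In this sketch each web is simply a mapping of file names to contents; the real command compares files on two servers, but the decision logic is the same: copy what is new or changed, delete what has disappeared from the source.

```python
def publish(source, destination):
    """Sync `destination` to `source`, returning what was copied and deleted."""
    copied, deleted = [], []
    for name, content in source.items():
        if destination.get(name) != content:   # new or changed file
            destination[name] = content
            copied.append(name)
    for name in list(destination):
        if name not in source:                 # deleted from the source web
            del destination[name]
            deleted.append(name)
    return copied, deleted

src = {"index.htm": "v2", "new.htm": "v1"}
dst = {"index.htm": "v1", "old.htm": "v1"}
copied, deleted = publish(src, dst)
print(copied)   # → ['index.htm', 'new.htm']
print(deleted)  # → ['old.htm']
```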

When an author publishes a FrontPage web using the Publish FrontPage Web command, the home page is renamed, if necessary, to match the naming convention on the destination Web server. Also, all FrontPage components in the FrontPage web are regenerated to take advantage of platform-specific functionality. For example, when a FrontPage web is published to an IIS server containing Microsoft Index Server, any Search Forms are configured to use the Index Server.

In intranet publishing, the most common method does not require the Publish FrontPage Web command. Instead, an author works directly on a Web server machine on an internal network, which is typically used to share information inside an organization. In this method, whenever a page is opened from the Web server and edited, the change is published to the intranet as soon as it is saved from the FrontPage Editor.

FrontPage Product Architecture

The FrontPage client system communicates with a Web server via the FrontPage Explorer. The library dedicated to communicating from the client is the WEC (Web Extender Client), a private FrontPage library. This library communicates via WinSock and TCP/IP. Wizards and custom applications on the client communicate with the Editor and Explorer using OLE automation.

The FrontPage client tools communicate with the Server Extensions using HTTP, the same protocol used to communicate between Web browsers and Web servers. FrontPage implements a remote procedure call (RPC) mechanism on top of the HTTP "POST" request, so that the FrontPage client can request documents, update the Tasks list, add new authors, and so on. The Web server sees "POST" requests addressed to the Server Extensions CGI programs and directs those requests accordingly. FrontPage correctly communicates between client and server through firewalls (proxy servers). FrontPage does not use or require the HTTP "PUT" request. As described in the HTTP specification, "PUT" sends a document to a Web server. However, few Web servers implement "PUT." Therefore, FrontPage uses the universally-implemented HTTP "POST" request for all communications with the Server Extensions.
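The shape of such an RPC-over-POST request can be sketched as follows. The endpoint path reuses the author.dll library name described below, but the path, method name and parameter are hypothetical illustrations, not the actual FrontPage wire protocol; the point is only that everything travels as an ordinary HTTP POST body of name/value pairs.

```python
from urllib.parse import urlencode

def build_rpc_request(service_url, method, **params):
    """Encode an RPC call as a plain HTTP POST of name/value pairs."""
    body = urlencode({"method": method, **params})
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return "POST", service_url, headers, body

verb, url, headers, body = build_rpc_request(
    "/_vti_bin/_vti_aut/author.dll",   # hypothetical authoring endpoint
    "open service",                    # hypothetical method name
    service_name="/myweb")
print(verb, body)  # → POST method=open+service&service_name=%2Fmyweb
```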

For most web servers, the Server Extensions are accessed by the Web server using the Common Gateway Interface (CGI), the near-universal Web server extension mechanism. The implementation of CGI differs somewhat among Web servers and platforms. For example, most Unix web servers invoke a CGI extension by running it in a separate fork, whereas some Windows NT web servers support a Dynamic Link Library (DLL) variant of CGI-style communication that incurs less overhead. But the information flow is similar for all CGI implementations: user-driven values and environment parameters are passed to the CGI extension using a block of name/value pairs, and the CGI extension program returns a result in HTML format.

The Server Extensions are divided into three libraries:

  • admin.dll for FrontPage web administration
  • author.dll for FrontPage web authoring support
  • shtml.dll for browse-time support

FrontPage Extensibility

FrontPage is extensible in the following areas:

  • Web wizards and page wizards
  • Web themes and page themes
  • menu commands
  • FrontPage components

The Microsoft FrontPage Software Developer's Kit, which is included on the FrontPage CD-ROM in the \SDK folder, contains full documentation for adding these features, including examples and sample code.

FrontPage Client and Server Extensions Compatibility

The FrontPage Server Extensions exist for FrontPage client versions 1.1, FrontPage 97, and FrontPage 98. Each FrontPage client release is accompanied by a new Server Extensions release that supports the new features of the client. For example, the current release, FrontPage 98, is accompanied by a new FrontPage 98 Server Extensions release. It is always most effective to use the most up-to-date versions of the FrontPage client and the Server Extensions.

Each new release of the Server Extensions is backward compatible with previous FrontPage client versions so that it continues to support the client's functionality at each earlier level. For example, a FrontPage 97 client can open and edit a FrontPage web from a Web server that has the FrontPage 98 Server Extensions installed with no loss of functionality in the FrontPage 97 client. However, the client will not be able to access new Server Extensions functionality added for the FrontPage 98 client such as applying themes to a FrontPage web or creating and saving a FrontPage web structure.

A FrontPage client can also open and edit FrontPage webs on a Web server containing an earlier version of the Server Extensions. However, the client will not be able to use its newer features, because they will not be supported by the earlier version of the Server Extensions.

Bug fixes and patches are occasionally issued for the current released version of the Server Extensions. Older versions of the Server Extensions do not receive these occasional updates. However, patches to the current version of the Server Extensions will work with earlier versions of the FrontPage client.

FrontPage Server Extensions: Supported Platforms

The following table provides the complete list of operating systems and Web servers for which the FrontPage Server Extensions are available.

On the following UNIX operating systems:

  • Digital UNIX 3.2c, 4.0 (Alpha)
  • BSD/OS 2.1 (Intel x86)
  • BSD/OS 3.0 (Intel x86)
  • Linux 3.03 (Red Hat Software) (Intel x86)
  • HP/UX 9.03, 10.01 (PA-RISC)
  • IRIX 5.3, 6.2 (Silicon Graphics)
  • Solaris 2.4, 2.5 (SPARC)
  • SunOS 4.1.3, 4.1.4 (SPARC)
  • AIX 3.2.5, 4.1, 4.2 (RS6000, PowerPC)
  • SCO OpenServer 5.0 (Intel x86)

the FrontPage Server Extensions are available for these Web servers:

  • Apache 1.1.3, 1.2
  • CERN 3.0
  • NCSA 1.5.2 (we do not support 1.5a or 1.5.1)
  • Netscape Commerce Server 1.12
  • Netscape Communications Server 1.12
  • Netscape Enterprise Server 2.0, 3.0
  • Netscape FastTrack 2.0

On Windows NT Server and Windows NT Workstation (Intel x86):

  • Internet Information Server 2.0 or higher, including IIS 4.0
  • Netscape Commerce Server 1.12
  • Netscape Communications Server 1.12
  • Netscape Enterprise Server 2.0, 3.0
  • Netscape FastTrack 2.0
  • O'Reilly WebSite
  • FrontPage Personal Web Server

On Windows NT Server and Windows NT Workstation (Alpha):

  • Microsoft Internet Information Server 2.0 or higher, including IIS 4.0
  • Microsoft Peer Web Services (on Windows NT Workstation)

On Windows 95:

  • Microsoft Personal Web Server
  • FrontPage Personal Web Server
  • Netscape FastTrack 2.0
  • O'Reilly WebSite

Thursday, March 09, 2006

LG.Philips Develops 100-Inch LCD
LG.Philips LCD Wednesday took the wraps off a 100-inch thin film transistor liquid crystal display (TFT-LCD) panel, which the company claims is the largest in the world.

The model, developed by the world’s second-largest LCD producer, is about 1.5 times bigger in area than the previous record-holder, the 82-inch panel from Samsung Electronics, the global top player.

“Our development of the 100-inch LCD panel reaffirms that LG.Philips LCD is the global leader in large-area LCD technology,” the firm’s vice president Yeo Sang-deog said.

“Technological advances for large-area LCD TVs, such as the 100-inch LCD, will act as a catalyst that accelerates demand for high-quality, large screens,” he added.

Developed on the company’s seventh-generation production lines in Paju, Kyonggi Province, the panel has a 16:9 widescreen format, measuring 2.2 meters wide by 1.2 meters high.
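As a quick sanity check, the quoted dimensions follow from the 100-inch diagonal and the 16:9 aspect ratio. This is a sketch; the 1920×1080 resolution is my assumption, inferred by reading the article's 6.22-million figure as a count of RGB subpixels.

```python
# Sanity-check the quoted panel geometry. The diagonal and aspect ratio
# are from the article; the 1920x1080 resolution is an assumption.
import math

diagonal_m = 100 * 0.0254                  # 100-inch diagonal in meters
unit = math.hypot(16, 9)                   # diagonal of a 16:9 unit rectangle
width_m = diagonal_m * 16 / unit           # ~2.21 m, matching the quoted 2.2 m
height_m = diagonal_m * 9 / unit           # ~1.25 m, close to the quoted 1.2 m
subpixels = 1920 * 1080 * 3                # 6,220,800: the "6.22 million" figure
print(f"{width_m:.2f} m x {height_m:.2f} m, {subpixels:,} subpixels")
```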

The high-definition model, which offers 6.22 million pixels and can produce 1.07 billion colors, boasts a response speed faster than 5 milliseconds.

That means the time it takes for the LCD TV’s liquid crystal cells to go from black to white is 5 milliseconds, down from the previous norm of double-digit milliseconds.

Lower numbers represent faster transitions and therefore fewer visible image artifacts: the monitor will not create a smear or blur pattern around moving objects.

The LCD panel of LG.Philips LCD, the joint venture between LG Electronics and Royal Philips Electronics of the Netherlands, also has a maximum 3,000:1 contrast ratio and 180-degree viewing angle.

The contrast ratio means that the brightest color the panel can display is 3,000 times brighter than the darkest color it can display simultaneously. The higher the ratio, the better the display.

In addition, the wide viewing angle means that images on the screen remain vivid to viewers from virtually any angle.
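The two figures of merit explained above reduce to simple arithmetic. A minimal sketch follows; the luminance values used in the contrast example are illustrative assumptions, not figures from the article.

```python
# Response time vs. frame time: at 60 frames per second each frame lasts
# about 16.7 ms, so a 5 ms black-to-white transition completes well within
# one frame, which is why fast panels smear less on moving images.
frame_time_ms = 1000 / 60        # ~16.7 ms per frame at 60 Hz
response_ms = 5                  # the panel's quoted response time
fits_in_one_frame = response_ms < frame_time_ms

# Contrast ratio: brightest luminance divided by darkest, shown at once.
# These luminance values are illustrative assumptions, not quoted figures.
white_nits = 600.0
black_nits = 0.2
contrast = white_nits / black_nits
print(fits_in_one_frame, f"{contrast:.0f}:1")  # True 3000:1
```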

The 100-inch model is expected to hold the top spot for the time being, because Samsung Electronics, the crosstown rival of LG.Philips LCD, has no plans to challenge the product.

Samsung, which developed the previously biggest 82-inch LCD panel last year, had been touted as arguably the only candidate capable of threatening LG.Philips in the race for the biggest LCD.

“We are not researching any LCD panel larger than 82 inches diagonally and have no plans to develop one at the moment,” Samsung spokesman Shin Young-jun said.

The remark contrasts sharply with that of Samsung’s executive vice president, Kim Sang-soo, who expressed confidence in producing mega-sized LCDs at the unveiling of the 82-inch panel in March 2005.

“Making a 97-inch model is just a matter of time. There is virtually no technical limitation on producing LCDs larger than 82 inches,” Kim said at the time.

Battle for Biggest LCD

LG.Philips originally took the driver’s seat in the battle for the biggest LCD by creating a 52-inch panel in December 2002 and a 55-inch one in October 2003, each the largest in history at the time.

Samsung then surprised the world with a 57-inch LCD in December 2003 and an 82-inch product in March 2005.

LG.Philips took the upper hand once again with the 100-inch item, a size previously regarded as out of reach for LCD, which had been considered suited only to smaller screens than the plasma display panel (PDP).

LCD was the first member of the flat-screen family to erode the long dominance of the bulky cathode-ray tube monitor, which causes eye strain and consumes a lot of power.

As the technology opened the door to flat-panel displays with outstanding advantages, another high-end screen technology, PDP, was also brought into the game.

Unlike bulky tube-based TVs, both LCD and PDP sets have a sleek appearance, because they form images with liquid crystal or plasma sandwiched between two thin glass plates.

Technologically, PDP is better suited to large screens, since it is difficult to trap plasma between two small plates. By contrast, LCD does not scale easily to large screens because of the properties of liquid crystal.

As a result, experts expected LCD to be the mainstream product for small screens, while PDP would be predominant in the market for screens larger than 40 inches.

However, the uphill battle between Korea’s dynamic duo, LG.Philips and Samsung, has worked in LCD’s favor, trimming its price and producing steady technological advances.

The 100-inch LCD is merely three inches shy of the biggest 103-inch PDP monitor, unveiled by Japan’s Panasonic earlier this year.

LCD prices halved last year, falling to the level of PDP prices, thanks to technological progress and rivalry in the market for 40-plus-inch panels, the major battlefield between the two flat-panel products.

The prices of large-area LCD panels are likely to drop rapidly this year and beyond, while PDP prices will most probably fall at a snail’s pace.

Market observers predict LCD will stay competitive even in the market for 50-inch displays, which they initially thought would be dominated outright by PDP.