Wednesday, March 25, 2009

Trick To Increase Browsing Speed for IE and Firefox

Here are a couple of tricks that can increase browsing speed in Firefox and Internet Explorer.

Trick to Increase Firefox Speed

1. Open Firefox, type about:config in the address bar and press Enter
2. Double-click network.http.pipelining and set it to true
3. Double-click network.http.pipelining.maxrequests and change its value from the default of 4 to 10
4. Right-click, create a new integer preference named nglayout.initialpaint.delay and set its value to 0
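
If you prefer, the same three preferences can also be set in one go from a user.js file in your Firefox profile folder. This is only a sketch of that alternative; the about:config steps above are all you really need:

// user.js (place in your Firefox profile folder)
user_pref("network.http.pipelining", true);
user_pref("network.http.pipelining.maxrequests", 10);
user_pref("nglayout.initialpaint.delay", 0);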

You are done. Enjoy faster Firefox browsing. Now for Internet Explorer.

Trick to Increase Internet Explorer Speed

1. Go to Start –> Run and type regedit
2. Select HKEY_CURRENT_USER –> Software –> Microsoft –> Windows –> CurrentVersion –> Internet Settings
3. Increase the connection-limit values (DECIMAL), typically MaxConnectionsPerServer and MaxConnectionsPer1_0Server, from their defaults to a higher value, e.g. 10
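
For anyone who prefers to script the change rather than click through regedit, here is a rough Python sketch. It assumes the two values being raised are MaxConnectionsPerServer and MaxConnectionsPer1_0Server, the connection limits this trick normally targets; back up your registry before experimenting.

import winreg  # named "_winreg" on Python 2

key_path = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings"
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path, 0, winreg.KEY_SET_VALUE) as key:
    # Raise IE's per-server connection limits from their defaults to 10
    winreg.SetValueEx(key, "MaxConnectionsPerServer", 0, winreg.REG_DWORD, 10)
    winreg.SetValueEx(key, "MaxConnectionsPer1_0Server", 0, winreg.REG_DWORD, 10)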

Thursday, March 19, 2009

How FriendFeed uses MySQL to store schema-less data

Background

We use MySQL for storing all of the data in FriendFeed. Our database has grown a lot as our user base has grown. We now store over 250 million entries and a bunch of other data, from comments and "likes" to friend lists.

As our database has grown, we have tried to iteratively deal with the scaling issues that come with rapid growth. We did the typical things, like using read slaves and memcache to increase read throughput and sharding our database to improve write throughput. However, as we grew, scaling our existing features to accommodate more traffic turned out to be much less of an issue than adding new features.

In particular, making schema changes or adding indexes to a database with more than 10-20 million rows completely locks the database for hours at a time. Removing old indexes takes just as much time, and not removing them hurts performance, because the database will continue to read and write to those unused blocks on every INSERT, pushing important blocks out of memory. There are complex operational procedures you can use to circumvent these problems (like setting up the new index on a slave and then swapping the slave and the master), but those procedures are so error-prone and heavyweight that they implicitly discouraged us from adding features that would require schema/index changes. Since our databases are all heavily sharded, the relational features of MySQL like JOIN have never been useful to us, so we decided to look outside the realm of RDBMS.

Lots of projects exist that are designed to tackle the problem of storing data with flexible schemas and building new indexes on the fly (e.g., CouchDB). However, none of them seemed widely used enough by large sites to inspire confidence. In the tests we read about and ran ourselves, none of the projects were stable or battle-tested enough for our needs (see this somewhat outdated article on CouchDB, for example). MySQL works. It doesn't corrupt data. Replication works. We understand its limitations already. We like MySQL for storage, just not RDBMS usage patterns.

After some deliberation, we decided to implement a "schema-less" storage system on top of MySQL rather than use a completely new storage system. This post attempts to describe the high-level details of the system. We are curious how other large sites have tackled these problems, and we thought some of the design work we have done might be useful to other developers.

Overview

Our datastore stores schema-less bags of properties (e.g., JSON objects or Python dictionaries). The only required property of stored entities is id, a 16-byte UUID. The rest of the entity is opaque as far as the datastore is concerned. We can change the "schema" simply by storing new properties.

We index data in these entities by storing indexes in separate MySQL tables. If we want to index three properties in each entity, we will have three MySQL tables - one for each index. If we want to stop using an index, we stop writing to that table from our code and, optionally, drop the table from MySQL. If we want a new index, we make a new MySQL table for that index and run a process to asynchronously populate the index without disrupting our live service.

As a result, we end up having more tables than we had before, but adding and removing indexes is easy. We have heavily optimized the process that populates new indexes (which we call "The Cleaner") so that it fills new indexes rapidly without disrupting the site. We can store new properties and index them in a day's time rather than a week's time, and we don't need to swap MySQL masters and slaves or do any other scary operational work to make it happen.

Details

In MySQL, our entities are stored in a table that looks like this:

CREATE TABLE entities (
    added_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    id BINARY(16) NOT NULL,
    updated TIMESTAMP NOT NULL,
    body MEDIUMBLOB,
    UNIQUE KEY (id),
    KEY (updated)
) ENGINE=InnoDB;


The added_id column is present because InnoDB stores data rows physically in primary key order. The AUTO_INCREMENT primary key ensures new entities are written sequentially on disk after old entities, which helps for both read and write locality (new entities tend to be read more frequently than old entities since FriendFeed pages are ordered reverse-chronologically). Entity bodies are stored as zlib-compressed, pickled Python dictionaries.
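
As a small illustration of that storage format (a sketch for illustration, not FriendFeed's actual code), packing and unpacking an entity body looks roughly like this:

import pickle
import uuid
import zlib

def serialize_body(entity):
    # Entity bodies are stored as zlib-compressed, pickled Python dictionaries
    return zlib.compress(pickle.dumps(entity))

def deserialize_body(blob):
    return pickle.loads(zlib.decompress(blob))

# The only required property is a 16-byte UUID, which fits the BINARY(16) id column
entity_id = uuid.uuid4().bytes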

Indexes are stored in separate tables. To create a new index, we create a new table storing the attributes we want to index on all of our database shards. For example, a typical entity in FriendFeed might look like this:

{
"id": "71f0c4d2291844cca2df6f486e96e37c",
"user_id": "f48b0440ca0c4f66991c4d5f6a078eaf",
"feed_id": "f48b0440ca0c4f66991c4d5f6a078eaf",
"title": "We just launched a new backend system for FriendFeed!",
"link": "http://friendfeed.com/e/71f0c4d2-2918-44cc-a2df-6f486e96e37c",
"published": 1235697046,
"updated": 1235697046,
}


We want to index the user_id attribute of these entities so we can render a page of all the entities a given user has posted. Our index table looks like this:

CREATE TABLE index_user_id (
    user_id BINARY(16) NOT NULL,
    entity_id BINARY(16) NOT NULL UNIQUE,
    PRIMARY KEY (user_id, entity_id)
) ENGINE=InnoDB;

Our datastore automatically maintains indexes on your behalf, so to start an instance of our datastore that stores entities like the structure above with the given indexes, you would write (in Python):

import binascii
import friendfeed.datastore

user_id_index = friendfeed.datastore.Index(
    table="index_user_id", properties=["user_id"], shard_on="user_id")
datastore = friendfeed.datastore.DataStore(
    mysql_shards=["127.0.0.1:3306", "127.0.0.1:3307"],
    indexes=[user_id_index])

new_entity = {
    "id": binascii.a2b_hex("71f0c4d2291844cca2df6f486e96e37c"),
    "user_id": binascii.a2b_hex("f48b0440ca0c4f66991c4d5f6a078eaf"),
    "feed_id": binascii.a2b_hex("f48b0440ca0c4f66991c4d5f6a078eaf"),
    "title": u"We just launched a new backend system for FriendFeed!",
    "link": u"http://friendfeed.com/e/71f0c4d2-2918-44cc-a2df-6f486e96e37c",
    "published": 1235697046,
    "updated": 1235697046,
}
datastore.put(new_entity)
entity = datastore.get(binascii.a2b_hex("71f0c4d2291844cca2df6f486e96e37c"))
entities = user_id_index.get_all(datastore, user_id=binascii.a2b_hex("f48b0440ca0c4f66991c4d5f6a078eaf"))

The Index class above looks for the user_id property in all entities and automatically maintains the index in the index_user_id table. Since our database is sharded, the shard_on argument is used to determine which shard the index gets stored on (in this case, entity["user_id"] % num_shards).
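
As a rough sketch of that routing logic (the helper name and the exact hashing are made up for illustration; the real code may differ):

import binascii

def pick_shard(entity, shard_on, num_shards):
    # Treat the 16-byte property (e.g. entity["user_id"]) as a big integer
    # and take it modulo the number of shards
    value = entity[shard_on]
    return int(binascii.b2a_hex(value), 16) % num_shards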

You can query an index using the index instance (see user_id_index.get_all above). The datastore code does the "join" between the index_user_id table and the entities table in Python, by first querying the index_user_id tables on all database shards to get a list of entity IDs and then fetching those entity IDs from the entities table.
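
A stripped-down sketch of that two-step lookup (the shard and query helpers here are hypothetical; the real get_all also handles batching and ordering):

def get_entities_for_user(datastore, user_id):
    # Step 1: collect matching entity IDs from index_user_id on every shard
    entity_ids = []
    for shard in datastore.shards:
        rows = shard.query(
            "SELECT entity_id FROM index_user_id WHERE user_id = %s", user_id)
        entity_ids.extend(row["entity_id"] for row in rows)

    # Step 2: fetch the canonical entities by ID from the entities table
    return [datastore.get(entity_id) for entity_id in entity_ids]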

To add a new index, e.g., on the link property, we would create a new table:

CREATE TABLE index_link (
    link VARCHAR(735) NOT NULL,
    entity_id BINARY(16) NOT NULL UNIQUE,
    PRIMARY KEY (link, entity_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

We would change our datastore initialization code to include this new index:

user_id_index = friendfeed.datastore.Index(
    table="index_user_id", properties=["user_id"], shard_on="user_id")
link_index = friendfeed.datastore.Index(
    table="index_link", properties=["link"], shard_on="link")
datastore = friendfeed.datastore.DataStore(
    mysql_shards=["127.0.0.1:3306", "127.0.0.1:3307"],
    indexes=[user_id_index, link_index])

And we could populate the index asynchronously (even while serving live traffic) with:

./rundatastorecleaner.py --index=index_link

Consistency and Atomicity

Since our database is sharded, and indexes for an entity can be stored on different shards than the entities themselves, consistency is an issue. What if the process crashes before it has written to all the index tables?

Building a transaction protocol was appealing to the most ambitious of FriendFeed engineers, but we wanted to keep the system as simple as possible. We decided to loosen constraints such that:

* The property bag stored in the main entities table is canonical
* Indexes may not reflect the actual entity values

Consequently, we write a new entity to the database with the following steps:

1. Write the entity to the entities table, using the ACID properties of InnoDB
2. Write the indexes to all of the index tables on all of the shards
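
In code, the write path might look roughly like this (a sketch with hypothetical helper names, reusing the serialize_body helper from the sketch above):

def put(datastore, entity):
    # Step 1: the entities row is canonical, so write it first (ACID via InnoDB)
    entity_shard = datastore.shard_for(entity["id"])
    entity_shard.execute(
        "INSERT INTO entities (id, updated, body) VALUES (%s, NOW(), %s)"
        " ON DUPLICATE KEY UPDATE updated = NOW(), body = VALUES(body)",
        entity["id"], serialize_body(entity))

    # Step 2: write the index rows on their own shards; if we crash partway
    # through, the indexes are merely stale or missing, never authoritative
    for index in datastore.indexes:
        index.write(datastore, entity)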

When we read from the index tables, we know they may not be accurate (i.e., they may reflect old property values if writing has not finished step 2). To ensure we don't return invalid entities based on the constraints above, we use the index tables to determine which entities to read, but we re-apply the query filters on the entities themselves rather than trusting the integrity of the indexes:

1. Read the entity_id from all of the index tables based on the query
2. Read the entities from the entities table from the given entity IDs
3. Filter (in Python) all of the entities that do not match the query conditions based on the actual property values
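
A minimal sketch of that read path, again with hypothetical helpers (the index lookup works like the join sketch earlier; the important part is the final re-check against the canonical property values):

def query_index(datastore, index, **conditions):
    # Steps 1-2: use the index only to find candidate IDs, then load the
    # canonical entities from the entities table
    entity_ids = index.lookup(datastore, **conditions)
    entities = [datastore.get(entity_id) for entity_id in entity_ids]

    # Step 3: never trust the index; re-apply the query conditions to the
    # real property values and drop anything that no longer matches
    return [entity for entity in entities
            if all(entity.get(name) == value
                   for name, value in conditions.items())]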

To ensure that indexes are not missing perpetually and inconsistencies are eventually fixed, the "Cleaner" process I mentioned above runs continuously over the entities table, writing missing indexes and cleaning up old and invalid indexes. It cleans recently updated entities first, so inconsistencies in the indexes get fixed fairly quickly (within a couple of seconds) in practice.
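
Purely to illustrate the idea (the real Cleaner is heavily optimized and batched; every name below is hypothetical):

def clean_forever(datastore):
    while True:
        # Walk the entities table newest-first so recent inconsistencies
        # are repaired within seconds, then start over from the top
        for entity in datastore.scan_entities(order="updated DESC"):
            for index in datastore.indexes:
                index.write(datastore, entity)         # fill in missing index rows
                index.remove_stale(datastore, entity)  # drop rows that no longer match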

The system has been really easy to work with so far. We have already changed the indexes a couple of times since we deployed the system, and we have started converting some of our biggest MySQL tables to use this new scheme so we can change their structure more liberally going forward.

Wednesday, March 18, 2009

Sun announces new cloud computing services

Sun Microsystems is making a new push into cloud computing today by announcing plans to offer an Internet-based service that will rent out data-storage and processing capacity to developers, startup businesses and others who may not want to invest in building their own data center.

"There are new startups now that are likely to say they're never going to have their own data center; they're going to have it all delivered through the cloud," said Lew Tucker, Sun's chief technology officer for cloud computing.

While the new offering makes use of Sun's open-source software, the company said it also plans to release programming tools that developers can use to create software that will run on "clouds" operated by Sun as well as by competing providers, such as Amazon.

In the future, Tucker said, Sun expects many customers will want to use applications or services that can operate on more than one computing cloud, including so-called public clouds that rent out capacity to a variety of users, and private clouds that serve users within a single company or organization. By distributing the tools known as application programming interfaces, or APIs, Sun hopes to encourage developers to create applications that work on its platform — and to draw customers who see "interoperability" as a convenience.

Big tech companies are jockeying to develop new strategies for cloud computing, in which customers use the Internet to access data or applications that are housed on servers somewhere else. The IDC research firm predicts worldwide spending on cloud computing will be $42 billion in 2012, or about 9 percent of all IT spending.

Computer makers like Hewlett-Packard, IBM and Sun sell the hardware and software used in data centers, and they are positioning themselves to offer both products and expertise for customers who want to build their own cloud platforms.

Computer makers also are expanding their utility computing services, following the success of Amazon, which built data centers for its own online retail business and then started renting out extra capacity through a service called EC2, or Elastic Compute Cloud.

Sun previously had a utility computing service aimed at business and academic customers who needed access to powerful computing capacity for a limited time, Tucker said. That's being replaced by the new service, which will incorporate software to make it more useful for developers and Web startups.

While some bigger businesses have concerns about security and reliability, IDC analyst Jean Bozman said developers and startups are eager cloud users because of the potential cost savings.

Sun's new service will include a Web interface that makes it easy to sign up and pay with a credit card, said Tucker, who declined to say what rates Sun will charge.

Thursday, March 12, 2009

Kaspersky Lab Delivers Heavy Duty Security for Lightweight PCs - MSNBC Wire Services - msnbc.com

It's not the value of the computing device; it's the value of the information on the device that needs to be protected. Ultra Portable PCs have taken the computing market by storm. Priced to sell, they are empowering families to add more computers to the home; and business travelers are carrying these lightweight machines to avoid being bogged down. But just like everywhere else we use computers, the threats online are growing more dangerous each day.

Kaspersky Lab is mindful that the importance of security on such a device can be overlooked. To help users make the most of these new, simpler machines, Kaspersky Lab introduces Kaspersky™ Security for Ultra Portables. The product includes all of the features and functionality of our most advanced consumer security suite, retooled, optimized and priced specifically for Ultra Portable PC buyers.

Kaspersky™ Security for Ultra Portables offers a new approach to keeping users safe. In addition to our top-rated detection technology, this new offering is packed with complete Internet security protection including anti-virus, anti-spyware, anti-phishing, anti-spam and anti-hacker technologies. Parental controls and a virtual keyboard are also included to protect an individual's privacy while shopping and banking online.

Tuesday, March 10, 2009

How To Create a Custom Boot Logo for Vista

Here is how you can create a custom boot logo for Vista.

First download and install the freeware Vista Boot Logo Generator. This tool creates the correct logo image type. Just make sure that you launch it with 'Run as administrator'.


Now select a bitmap image and save two 24-bit .bmp versions of it on your desktop: one at 800×600 and the other at 1024×768.
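
If you would rather script this resizing step, here is a small sketch using the Pillow (PIL) imaging library, which is not part of the original instructions; any 800×600 and 1024×768 24-bit bitmaps will do:

from PIL import Image

# Convert a source picture into the two 24-bit bitmaps the logo generator expects
source = Image.open("my_logo.png").convert("RGB")  # RGB gives a 24-bit image
source.resize((800, 600)).save("logo_800x600.bmp", "BMP")
source.resize((1024, 768)).save("logo_1024x768.bmp", "BMP")
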
Now take ownership of the winload.exe.mui file as follows. Open an administrator command prompt: type cmd in the Start menu search box and hit Ctrl+Shift+Enter.

Now run this command:

Takeown /f C:\Windows\System32\en-US\winload.exe.mui


Next, run the following command (replace <your username> with your Windows user name):

Cacls C:\Windows\System32\en-US\winload.exe.mui /G <your username>:F


Having done this, copy the winload.exe.mui file created by Vista Boot Logo Generator into C:\Windows\System32\en-US\, overwriting the original.

Finally, type msconfig in the Start menu search box and hit Enter.
Under the Boot tab, check the "No GUI boot" option and click Apply/OK. Reboot.

It's best to always create a System Restore point first!