
Wednesday, April 8, 2009

.htaccess - Part 2

Well, first another boring bit! To prevent people from being able to see the contents of your .htaccess file, you need to place the following code in the file:


<Files .htaccess>
order allow,deny
deny from all
</Files>

Be sure to format that just as it is above, with each line on a new line as shown. There is every likelihood that your existing .htaccess file, if you have one, includes those lines already.

Magic Trick No. 1: Redirect to Files or Directories

You have just finished a major overhaul on your site, which unfortunately meant you have renamed many pages that have already been indexed by search engines, and quite possibly linked to or bookmarked by users. You could use a redirect meta tag in the head of the old pages to bring users to the new ones, but some search engines may not follow the redirect and others frown upon it.

.htaccess leaps to the rescue!

Enter this line in your .htaccess file:

Redirect permanent /oldfile.html http://www.domain.com/filename.html

You can repeat that line for each file you need to redirect. Remember to include the directory name if the file is in a directory other than the root directory:

Redirect permanent /olddirectory/oldfile.html http://www.domain.com/newdirectory/newfile.html

If you have just renamed a directory you can use just the directory name:

Redirect permanent /olddirectory http://www.domain.com/newdirectory

(Note: The above commands should each be on a single line, they may be wrapping here but make sure they are on a single line when you copy them into your file.)

This has the added advantage of helping to prevent 'link rot', an increasing problem on the Internet as people reorganize their sites. Now people who have linked to pages on your site will still have functioning links, even if the pages have changed location.

Magic Trick No. 2: Change the Default Directory Page

In most cases the default directory page is index.htm or index.html. Many servers allow a range of pages called index, with a variety of extensions, to be the default page.

Suppose, though, that (for reasons of your own) you want a page called honeybee.html or margarine.html to be a directory's home page?

No problem. Just put the following line in your .htaccess file for that directory:

DirectoryIndex honeybee.html

You can also use this command to specify alternatives. If the first filename listed does not exist the server will look for the next and so on. So you might have:

DirectoryIndex index.html index.htm honeybee.html margarine.html

(Again, the above should all be on a single line)

Magic Trick No. 3: Allow/Prevent Directory Browsing

Most servers are configured so that directory browsing is not allowed: if someone enters the URL of a directory that does not contain an index file, they will not see the contents of the directory but will instead get an error message. If your site is not configured this way you can prevent directory browsing by adding this simple line to your .htaccess file:

IndexIgnore */*

But there may be times when you want to allow browsing - perhaps to allow access to files for downloading, or for whatever reason - on a server configured not to allow it. You can override the server's settings with this line:

Options +Indexes

Easy!

Magic Trick No. 4: Allow SSI in .html files

Most servers will only parse files ending in .shtml for Server Side Includes. You may not wish to use this extension, or you may wish to retain the .htm or .html extension used by files prior to your changing the site and using SSI for the first time.

Add the following to your .htaccess file:

AddType text/html .html
AddHandler server-parsed .html
AddHandler server-parsed .htm

You can add both extensions or just one.

Remember though that files which must be parsed by the server before being displayed will load more slowly than standard pages. If you change things as above, the server will parse all .html and .htm pages, even those that do not contain any includes. This can significantly, and unnecessarily, slow down the loading of pages without includes.

Magic Trick No 5: Keep Unwanted Users Out

You can ban users by IP address or even ban an entire range of IP addresses. This is pretty drastic action, but if you don't want them, it can be done very easily.

Add the following lines:

order allow,deny
deny from 123.456.78.90
deny from 123.456.78
deny from .aol.com
allow from all

The second line bans the IP address 123.456.78.90 (the numbers here are placeholders - real octets only go up to 255), the third line bans everyone in the range 123.456.78.0 to 123.456.78.255 and so is much more drastic. The fourth line bans everyone connecting from aol.com. A somewhat excessive display of power perhaps!

One thing to bear in mind here is that banned users will get a 403 error - "You do not have permission to access this site". That is fine unless you have configured a custom 403 page that in effect lets them back in. So if you are banning users, for whatever reason, make sure your 403 error page is a dead end.

Magic Trick No. 6: Prevent Linking to Your Images

The greatest and most irritating bandwidth leech is having someone link to images on your site. You can foil such thieves very easily with .htaccess. Copy the following into your .htaccess file:

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?domain\.com/.*$ [NC]
RewriteRule \.(gif|jpg)$ - [F]

You don't need to understand any of that! Just change 'domain.com' to the name of your domain.

(Again each command should be on a single line. There are 4 lines above, each starting with 'Rewrite')

If you want to really let them know they have been rumbled, why not make a suitable 'stop stealing my bandwidth' image,

call it stealing.gif, save it to your images folder, and replace the final RewriteRule line above with the following two lines (the extra RewriteCond stops the rule from redirecting requests for stealing.gif itself and looping):

RewriteCond %{REQUEST_URI} !stealing\.gif$
RewriteRule \.(gif|jpg)$ http://www.domainname.com/images/stealing.gif [R,L]

(Each of the above commands should be on a single line)

Magic Trick No 7: Stop the Email Collectors

While you positively want to encourage robot visitors from the search engines, there are other less benevolent robots you would prefer stayed away. Chief among these are those nasty 'bots that crawl around the web sucking email addresses from web pages and adding them to spam mail lists.

RewriteCond %{HTTP_USER_AGENT} Wget [OR]
RewriteCond %{HTTP_USER_AGENT} CherryPickerSE [OR]
RewriteCond %{HTTP_USER_AGENT} CherryPickerElite [OR]
RewriteCond %{HTTP_USER_AGENT} EmailCollector [OR]
RewriteCond %{HTTP_USER_AGENT} EmailSiphon [OR]
RewriteCond %{HTTP_USER_AGENT} EmailWolf [OR]
RewriteCond %{HTTP_USER_AGENT} ExtractorPro
RewriteRule ^.*$ X.html [L]

Note that each RewriteCond line except the last ends with '[OR]'. If you add any other robots to this list, make sure every line but the final one carries that '[OR]' flag.

This is by no means foolproof. Many of these sniffers do not identify themselves and it is almost impossible to create an exhaustive list of those that do. It's worth a try though, even if it only keeps some away. The above are as many as I could find.

....and Finally

There is one very important area of the .htaccess file's use that we have not really mentioned and that is its use for user authentication. It is perfectly possible to configure your .htaccess files by hand to control access to directories on your site, but this is rarely necessary.

In most cases your host will provide a method to allow you to much more easily configure the file from your hosting control panel and there are a myriad of Perl scripts that will allow you to set up full user management systems by harnessing the power of .htaccess.
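By way of illustration only, a hand-rolled protection block looks something like the sketch below; the AuthName text and the path to the .htpasswd password file are placeholders, and the password file itself is created separately (for example with Apache's htpasswd utility):

AuthType Basic
AuthName "Members Area"
AuthUserFile /full/server/path/to/.htpasswd
Require valid-user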

.htaccess - Part 1

If your site is hosted on a Unix or Linux server which runs Apache, you may already be familiar with your .htaccess file - most likely through its best-known role in password protection.

But that is far from the whole story! In this article we will look at some of the other things that this powerful little file can do. In part two we have 7 Magic Tricks that you can perform with .htaccess, but first let's have a look at the file itself.

What is the .htaccess file?

The .htaccess file is a text file which resides in your main site directory and/or in any subdirectory of your main directory. There may be just one, there may be a separate one in each directory, or you may find (or create) one only in specific directories.

Any commands in a .htaccess file will affect both the directory it is in and any subdirectories of that directory. Thus if you have just one, in your main directory, it will affect your whole site. If you place one in a subdirectory it will affect all the contents of that directory.

Some Important Points

Windows does not use the .htaccess system. I believe there are ways of doing the things .htaccess does on Windows servers but that is a story for another day and I am afraid I will not be telling it - it just isn't as simple or as elegant as the way Apache manages things in my humble opinion! So unless you are on a Linux/Unix server, this article is no good to you. Sorry.

A warning you will commonly see is that changing the .htaccess file on a server that has FrontPage extensions installed will at best not work and at worst make a complete mess of your extensions. I have to say that this has not been my experience and I have done a fair bit of messing with .htaccess files on FrontPage sites, including using .htaccess for authentication. However do any of these things at your own risk - I cannot be responsible for any harm they might cause.

Your host may not support alteration of the .htaccess file; either contact them first and ask before you make changes or proceed with caution and be sure you have a backup of the original file in case of problems.

Oh! And none of the 'Magic Tricks' described in this article are either magic or tricks. They just seem that way!

Working With Your .htaccess File

Sometimes the first problem is finding it! When you FTP to your site the .htaccess file is generally the first one displayed in a directory if it exists.


Some servers are configured to hide files whose names begin with a period. Your FTP client allows you to choose to display these files. In WS_FTP you can do this by entering -la or -al and then clicking Refresh. Other clients may use a different method - check the help files in yours.

Editing should be done in a text editor, such as NotePad. You should not edit .htaccess files in editors such as FrontPage. The best thing to do is download a copy of your .htaccess file to your computer, edit it, and upload again, remembering to save a copy of the original in case of errors.

If you do not already have a .htaccess file you can create one in Notepad; it is just a simple text file. However, when saving it to the server you may need to rename it from .htaccess.txt to just .htaccess. The two are NOT the same. In fact, .htaccess is an extension - to a file with no name!

It is very important when entering commands in your file that each is entered on a new line and that the lines do not wrap. If you find, when you paste any of the commands in this article into your file, that the lines are not breaking or are wrapping, you will need to correct this.

You must upload and download your .htaccess file in ASCII mode, not BINARY.

So, What about the Magic Tricks? Read on!

Creating and Using a robots.txt File

FrontPage Newsletter Article July 2002

In this article we will take a look at how you can create an effective robots.txt file for your site, why you need one and at some tools that can help with the job.

What on Earth is a robots.txt File?

A robots.txt file is a file placed on your server to tell the various search engine spiders not to crawl or index certain sections or pages of your site. You can use it to prevent indexing totally, prevent certain areas of your site from being indexed, or to issue individual indexing instructions to specific search engines.

The file itself is a simple text file, which can be created in Notepad. It needs to be saved to the root directory of your site, that is, the directory where your home page or index page is.

Why Do I Need One?

All search engines, or at least all the important ones, now look for a robots.txt file as soon as their spiders or bots arrive on your site. So, even if you currently do not need to exclude the spiders from any part of your site, having a robots.txt file is still a good idea; it can act as a sort of invitation into your site.

There are a number of situations where you may wish to exclude spiders from some or all of your site.

  1. You are still building the site, or certain pages, and do not want the unfinished work to appear in search engines
  2. You have information that, while not sensitive enough to bother password protecting, is of no interest to anyone but those it is intended for and you would prefer it did not appear in search engines.
  3. Most people will have some directories they would prefer were not crawled - for example do you really need to have your cgi-bin indexed? Or a directory that simply contains thank you or error pages.
  4. If you are using doorway pages (similar pages, each optimized for an individual search engine) you may wish to ensure that individual robots do not have access to all of them. This is important in order to avoid being penalized for spamming a search engine with a series of overly similar pages.
  5. You would like to exclude some bots or spiders altogether, for example those from search engines you do not want to appear in or those whose chief purpose is collecting email addresses.

The very fact that search engines are looking for them is reason enough to put one on your site. Have you looked at your site statistics recently? If your stats include a section on 'files not found', you are sure to see many entries where search engines spiders looked for, and failed to find, a robots.txt file on your site.

Creating the robots.txt file

There is nothing difficult about creating a basic robots.txt file. It can be created using notepad or whatever is your favorite text editor. Each entry has just two lines:

User-Agent: [Spider or Bot name]
Disallow: [Directory or File Name]

The Disallow line can be repeated for each directory or file you want to exclude, and the whole entry can be repeated for each spider or bot you want to exclude.

A few examples will make it clearer.



1. Exclude a file from an individual Search Engine

You have a file, privatefile.htm, in a directory called 'private' that you do not wish to be indexed by Google. You know that the spider that Google sends out is called 'Googlebot'. You would add these lines to your robots.txt file:

User-Agent: Googlebot
Disallow: /private/privatefile.htm

2. Exclude a section of your site from all spiders and bots

You are building a new section to your site in a directory called 'newsection' and do not wish it to be indexed before you are finished. In this case you do not need to specify each robot that you wish to exclude, you can simply use a wildcard character, '*', to exclude them all.

User-Agent: *
Disallow: /newsection/

Note that there is a forward slash at the beginning and end of the directory name, indicating that you do not want any files in that directory indexed.

3. Allow all spiders to index everything

Once again you can use the wildcard, '*', to let all spiders know they are welcome. The second (Disallow) line is simply left empty - that is, you disallow nothing.

User-agent: *
Disallow:

4. Allow no spiders to index any part of your site

This requires just a tiny change from the command above - be careful!

User-agent: *
Disallow: /

If you use this command while building your site, don't forget to remove it once your site is live!

Getting More Complicated

If you have a more complex set of requirements you are going to need a robots.txt file with a number of different commands. You need to be quite careful creating such a file; you do not want to accidentally disallow access to spiders or to areas you really want indexed.

Let's take quite a complex scenario. You want most spiders to index most of your site, with the following exceptions:

  1. You want none of the files in your cgi-bin indexed at all, nor do you want any of the FP specific folders indexed - eg _private, _themes, _vti_cnf and so on.
  2. You want to exclude your entire site from a single search engine - let's say Alta Vista.
  3. You do not want any of your images to appear in the Google Image Search index.
  4. You want to present a different version of a particular page to Lycos and Google.

  (A note of caution on point 4: there are a lot of question marks over the use of 'doorway pages' in this fashion. This is not the place for a discussion of them, but if you are using this technique you should do some research on it first.)

Let's take this one in stages!

1. First you would ban all search engines from the directories you do not want indexed at all:

User-agent: *
Disallow: /cgi-bin/
Disallow: /_borders/
Disallow: /_derived/
Disallow: /_fpclass/
Disallow: /_overlay/
Disallow: /_private/
Disallow: /_themes/
Disallow: /_vti_bin/
Disallow: /_vti_cnf/
Disallow: /_vti_log/
Disallow: /_vti_map/
Disallow: /_vti_pvt/
Disallow: /_vti_txt/

It is not necessary to create a new command for each directory; it is quite acceptable to just list them as above.

2. The next thing we want to do is to prevent Alta Vista from getting in there at all. The Altavista bot is called Scooter.

User-Agent: Scooter
Disallow: /

This entry can be thought of as an amendment to the first entry, which allowed all bots in everywhere except the defined directories. We are now saying that all bots can index the whole site apart from the directories specified in 1 above, except Scooter, which can index nothing.

3. Now you want to keep Google away from those images. Google grabs images with a separate bot, called Googlebot-Image, from the one that indexes pages generally. You have a couple of choices here:

User-Agent: Googlebot-Image
Disallow: /images/

That will work if you are very organized and keep all your images strictly in the images folder.

User-Agent: Googlebot-Image
Disallow: /

This one will prevent the Google image bot from indexing any of your images, no matter where they are in your site.

4. Finally, you have two pages called content1.html and content2.html, which are optimized for Google and Lycos respectively. So, you want to hide content1.html from Lycos (The Lycos spider is called T-Rex):

User-Agent: T-Rex
Disallow: /content1.html

and content2.html from Google.

User-Agent: Googlebot
Disallow: /content2.html
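Putting the four stages together, the finished file would look something like the sketch below (using the bot and file names from the examples above). One caveat: most bots that find a record naming them specifically will follow that record alone and ignore the '*' record, so you may wish to repeat the general Disallow lines under the named bots if that matters to you.

User-agent: *
Disallow: /cgi-bin/
Disallow: /_borders/
Disallow: /_derived/
Disallow: /_fpclass/
Disallow: /_overlay/
Disallow: /_private/
Disallow: /_themes/
Disallow: /_vti_bin/
Disallow: /_vti_cnf/
Disallow: /_vti_log/
Disallow: /_vti_map/
Disallow: /_vti_pvt/
Disallow: /_vti_txt/

User-Agent: Scooter
Disallow: /

User-Agent: Googlebot-Image
Disallow: /images/

User-Agent: T-Rex
Disallow: /content1.html

User-Agent: Googlebot
Disallow: /content2.html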

Summary and Links

Writing a robots.txt file is, as you have seen, a relatively simple matter. However, it is important to bear in mind that it is not a security method. It may stop your specified pages from appearing in search engines, but it will not make them unavailable. There are many hundreds of bots and spiders crawling the Internet now and while most will respect your robots.txt file, some will not, and there are even some designed specifically to visit the very pages you are specifying as being out of bounds.

For those who would like to know more here are some resources you may find useful.

robots.txt File Generators

I think it may be easier to write your own file than use these but for those who would like to have their robots file generated automatically there are a couple of free online tools that will do the trick for you.

Sunday, March 29, 2009

Tips to recover scratched CD/DVDs

Don't you feel like crying every time you add another disc to your pile of scratched discs? Trashing that disc which contained your favorite songs, pics, files, games or videos is not easy.

Read-on, if you find yourself wishing for a miracle every time your favorite CD is scratched:

Home Remedy :
Here's an easy home remedy, which might give you the desired results. Rub a small amount of toothpaste on the scratch and polish the CD with a soft cloth and any petroleum-based polishing solution (like clear shoe polish). Squirt a drop of Brasso and wipe it with a clean cloth.

Saturday, March 28, 2009

Top 5 Online Scams

Reports are that nearly 10 million Americans were the victims of online fraud during 2004. Thousands of con artists are constantly hunting for unsuspecting people online. Below, we have 5 of the most common scams that are currently circulating the Internet:
  1. Auction Fraud:
    FBI's Internet Crime Complaint Center reports that Online Auction Fraud accounted for three quarters of 2004's complaints. Here is what usually happens in the Auction Fraud scam:
    The product that you've purchased is never what you actually bid on. It is either a cheap imitation or it doesn't match at all. The description of the product you are bidding on will usually be vague or completely fake. One buyer reports he purchased a portable DVD player for $100, but what he got instead was a Web address for a site where he could buy a player at a $200 discount.
  2. Phishing Scams:
    In this scam, you receive a very real-looking e-mail that appears to come from your bank. It typically tries to alarm you about identity theft and asks that you log in and verify your account information. The message says that if you don't take action immediately, your account will be terminated.
    When you click the supplied link, you will be taken to a site that is a replica of your bank's site, with text boxes to enter the required account information. Once you enter this information and click send, the scammers have all the information they need to steal your identity and start opening new credit accounts, or whatever else they would like to do with it. In some instances, really smart phishers direct you to the genuine Web site, then pop up a window over the site that captures your personal information.
  3. Nigerian 419 Letter:
    Here, you receive an e-mail, usually written in ALL capital letters, that starts out something like this:
    "DEAR SIR/MADAM: I REPRESENT THE RECENTLY DEPOSED MINISTER OF AGRICULTURE FOR NODAMBIZIA, WHO HAS EMBEZZLED 30 MILLION DOLLARS FROM HIS STARVING COUNTRYMEN AND NOW NEEDS TO GET IT OUT OF THE COUNTRY..."
    The letter states that the scammers are seeking an accomplice who will transfer the funds into their account for a cut of the total--usually around 30 percent. You'll be asked to travel overseas to meet with the scammers and complete the necessary paperwork. But before the transaction can be finalized, you must pay thousands of dollars in "taxes," "attorney costs," "bribes," or other advance fees.
    Well, of course, there is no minister and no money. Victims who travel overseas could find themselves physically threatened and not allowed to leave until they pay a lot of money. Several victims have been reported killed or gone missing while chasing a 419 scheme.
    (FYI, "419" is named for the section of Nigeria 's penal code that the scam violates.)
  4. Postal Forwarding or Reshipping Scam:
    Remember the "work-at-home" envelope-stuffing scam that promised steady income for minimal labor, and a minimal fee to get started? Well, the loss from that scam is small compared to this clever postal forwarding/reshipping scam. This scam lures job seekers with an online ad looking for a "correspondence manager" promising big bucks for little or no work. An offshore corporation without a U.S. address or bank account needs someone to have goods sent to their address and reship them overseas. You may also be asked to accept wire transfers into your bank account, and then transfer the money to your new boss's account. Your reward is a percentage of the goods or amount transferred.
    What you are never told is that the products are purchased online using stolen credit cards and shipped to your address. You then reship them to the scammers who, in turn, fence them overseas. So, in reality, you are transferring stolen funds from one account to another. This dangerous situation usually ends with your bank account being cleaned out, or worse, a warrant for your arrest.
  5. "Congratulations, You've Won an Xbox":
    In this last one, you get an e-mail telling you that you are a big winner! It will tell you that you've won a product such as an Xbox or an iPod. All you need to do is visit a Web site and provide your debit card number and PIN to cover "shipping and handling" costs.
    The item will never arrive. A few months later, unknown charges appear on your bank account.

The Year 2038 Problem

What is it?

Starting at 03:14:07 GMT on Tuesday, January 19, 2038, we can expect to see lots of systems around the world breaking magnificently: satellites falling out of orbit, massive power outages (like the 2003 North American blackout), hospital life support system failures, phone system interruptions, banking errors, etc. One second after this critical second, many of these systems will have wildly inaccurate date settings, producing all kinds of unpredictable consequences. In short, many of the dire predictions for the year 2000 are much more likely to actually occur in the year 2038! Consider the year 2000 just a dry run. In case you think we can sit on this issue for another 30 years before addressing it, consider that reports of temporal echoes of the 2038 problem are already starting to appear in future date calculations for mortgages and vital statistics!

In the first month of the year 2038 C.E. many computers will encounter a date-related bug in their operating systems and/or in the applications they run. This can result in incorrect and wildly inaccurate dates being reported by the operating system and/or applications. The effect of this bug is hard to predict, because many applications are not prepared for the resulting "skip" in reported time - anywhere from 1901 to a "broken record" repeat of the reported time at the second the bug occurs. Also, leap seconds may make some small adjustment to the actual moment the bug expresses itself. This bug is expected to cause serious problems on many platforms, especially Unix and Unix-like platforms, because these systems will "run out of time".

What causes it?

time_t is a data type used by C and C++ programs to represent dates and times internally. (Windows programmers out there might also recognize it as the basis for the CTime and CTimeSpan classes in MFC.) time_t is actually just an integer, a whole number, that counts the number of seconds since January 1, 1970 at 12:00 AM Greenwich Mean Time. A time_t value of 0 would be 12:00:00 AM (exactly midnight) 1-Jan-1970, a time_t value of 1 would be 12:00:01 AM (one second after midnight) 1-Jan-1970, etc.

Some example times and their exact time_t representations:

Date & time                          time_t representation
1-Jan-1970, 12:00:00 AM GMT          0
1-Jan-1970, 12:01:00 AM GMT          60
1-Jan-1970, 01:00:00 AM GMT          3 600
2-Jan-1970, 12:00:00 AM GMT          86 400
1-Jan-1971, 12:00:00 AM GMT          31 536 000
1-Jan-1972, 12:00:00 AM GMT          63 072 000
1-Jan-2038, 12:00:00 AM GMT          2 145 916 800
19-Jan-2038, 03:14:07 AM GMT         2 147 483 647

By the year 2038, the time_t representation for the current time will be over 2 140 000 000. And that's the problem. A modern 32-bit computer stores a "signed integer" data type, such as time_t, in 32 bits. The first of these bits is used for the positive/negative sign of the integer, while the remaining 31 bits are used to store the number itself. The highest number these 31 data bits can store works out to exactly 2 147 483 647. A time_t value of this exact number, 2 147 483 647, represents January 19, 2038, at 7 seconds past 3:14 AM Greenwich Mean Time. So, at 3:14:07 AM GMT on that fateful day, every time_t used in a 32-bit C or C++ program will reach its upper limit.

One second later, on 19-January-2038 at 3:14:08 AM GMT, disaster strikes.

When a signed integer reaches its maximum value and then gets incremented, it wraps around to its lowest possible negative value. This means a 32-bit signed integer, such as a time_t, set to its maximum value of 2 147 483 647 and then incremented by 1, will become -2 147 483 648. Note that "-" sign at the beginning of this large number. A time_t value of -2 147 483 648 would represent December 13, 1901 at 8:45:52 PM GMT.

So, if all goes normally, 19-January-2038 will suddenly become 13-December-1901 in every time_t across the globe, and every date calculation based on this figure will go haywire. And it gets worse. Most of the support functions that use the time_t data type cannot handle negative time_t values at all. They simply fail and return an error code.
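To see those two critical values side by side in C terms, here is a minimal sketch (my own illustration, not part of the original article; it assumes a platform whose native time_t is wider than 32 bits, so that gmtime() can still represent both dates):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* The largest and smallest values a 32-bit signed time_t can hold. */
    time_t last_good = (time_t)INT32_MAX;   /*  2 147 483 647 */
    time_t wrapped   = (time_t)INT32_MIN;   /* -2 147 483 648, the value it wraps to */

    printf("Last 32-bit second: %s", asctime(gmtime(&last_good)));  /* Tue Jan 19 03:14:07 2038 */
    printf("One second later  : %s", asctime(gmtime(&wrapped)));    /* Fri Dec 13 20:45:52 1901 */
    return 0;
}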

A quick check with the following Perl script may help determine if your computers will have problems (this requires Perl to be installed on your system, of course):

#!/usr/bin/perl
# Use POSIX (Portable Operating System Interface)
use POSIX;

# Set the time zone to GMT (Greenwich Mean Time) for date calculations.
$ENV{'TZ'} = "GMT";

# Count up in seconds of Epoch time just before and after the critical event.
for ($clock = 2147483641; $clock < 2147483651; $clock++)
{
    print ctime($clock);
}


For example, the output of this script on Debian GNU/Linux (kernel 2.4.22) (an affected system) will be:

# ./2038.pl
Tue Jan 19 03:14:01 2038
Tue Jan 19 03:14:02 2038
Tue Jan 19 03:14:03 2038
Tue Jan 19 03:14:04 2038
Tue Jan 19 03:14:05 2038
Tue Jan 19 03:14:06 2038
Tue Jan 19 03:14:07 2038
Fri Dec 13 20:45:52 1901
Fri Dec 13 20:45:52 1901
Fri Dec 13 20:45:52 1901

Solution

"The best way to predict the future is to engineer it." Consider testing your mission-critical code well ahead of time on a non-production test platform set just before the critical date. For more general applications, just using large types for storing dates will do the trick in most cases. For example, in GNU C, 64-bits (a "long " type) is sufficient to keep the time from rolling over for literally geological eons This just means any executables the operating systems runs will always get the correct time reported to them when queried in the correct manner. It doesn't stop the executables you may still want to be worried about

Well-written programs can simply be recompiled with a new version of the library that uses, for example, 8-byte values for the storage format. This is possible because the library encapsulates the whole time activity with its own time types and functions (unlike most mainframe programs, which did not standardize their date formats or calculations). So the Year 2038 problem should not be nearly as hard to fix as the Y2K problem was.
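As a quick sanity check of your own platform, a tiny C program (again, my own sketch rather than anything from the article) can report how wide time_t actually is:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* A 4-byte (32-bit) time_t rolls over on 19-Jan-2038; 8 bytes is safe for eons. */
    if (sizeof(time_t) >= 8)
        printf("time_t is %zu bytes: not affected by the 2038 roll-over\n", sizeof(time_t));
    else
        printf("time_t is %zu bytes: this build will hit the 2038 problem\n", sizeof(time_t));
    return 0;
}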

Admittedly, some don't feel that this impending disaster will strike too many people. They reason that, by the time 2038 rolls around, most programs will be running on 64-bit or even 128-bit computers. In a 64-bit program, a time_t could represent any date and time in the future out to 292 000 000 000 A.D., which is about 20 times the currently estimated age of the universe. The problem with this kind of optimism is the same root problem behind most of the Year 2000 concerns that plagued the software industry in previous years: Legacy Code. Even if every PC in the year 2038 has a 64-bit CPU, there will be a lot of older 32-bit programs running on them.

The greatest danger with the Year 2038 Problem is its invisibility. The more-famous Year 2000 is a big, round number; it only takes a few seconds of thought, even for a computer-illiterate person, to imagine what might happen when 1999 turns into 2000. But January 19, 2038 is not nearly as obvious. Software companies will probably not think of trying out a Year 2038 scenario before doomsday strikes. Of course, there will be some warning ahead of time. Scheduling software, billing programs, personal reminder calendars, and other such pieces of code that set dates in the near future will fail as soon as one of their target dates exceeds 19-Jan-2038, assuming a time_t is used to store them.

What Is Copyright?

The copyright symbol (©) is used to give notice that a work is covered by copyright.

Year of copyright:
The year(s) of copyright are listed after the © symbol. If the work has been modified (i.e., a new edition) and re-copyrighted, there will be more than one year listed.

How long Copyrights last?
Copyright subsists for a variety of lengths in different jurisdictions, with different categories of works and the length it subsists for also depends on whether a work is published or unpublished. In most of the world the default length of copyright for many works is either life of the author plus 50 years, or life of the author plus 70 years. Copyright in general always expires at the end of the year concerned, rather than on the exact date of the death of the author.

General Computer Cleaning Tips

  1. Before you clean a computer or any component, be sure to turn the power off and unplug it from the outlet.
  2. Use caution when cleaning inside the computer's case not to disturb any plugs or jumpers. If you do, this will make for difficult troubleshooting when you turn the computer back on.
  3. Avoid spraying any type of liquid directly on to a computer component. Spray the liquid on to a cloth, then apply it to the computer component.
  4. Never use a house vacuum cleaner to clean the dust out of your computer case. House vacuums generate a lot of static electricity that can damage your systems components. There are portable battery operated vacuums available that are designed for use in a computer environment. It is fine to use your house vacuum to suck up the dirt and dust around your computer or even to suck the dust out of your keyboard.
  5. Make sure that you never get any component inside your computer wet. It is not advisable to use any cleaning liquid inside the case. You can use some canned compressed air to remove any dust from the case and case fans. Be sure to take your computer to a different location when blowing the dust out.
  6. Be sure to visit your computer manufacturer's web site to find out what cleaning solvents are recommended for cleaning your computer. I recommend just using warm water for almost any computer cleaning task. But if you need a stronger cleaning solution, be sure that it is highly diluted.

Use Google As Calculator & Spell Check

For obvious reasons I use Google extensively for all of my searches. I also use two more of Google's features a lot - one directly and another indirectly.

1. For all my calculations, such as 4*5*6+5-3 (which works out to 122)
2. For finding the spelling of a word, I google. In fact, as I was typing this post I made sure that the words 'extensively' and 'obsessed' aren't spelt as 'extensievely' and 'obcessed'.

Release Numbering Process

Release Numbering Process: It won’t be the same for all companies... But generally it will be so... :)

Each product will have a series of product releases. A product release has a generic four-integer identifier (product release number) A.B.C.D, where:
A, B, C, D are integers
A corresponds to the Major / Marketing release version (significant change in the product)
B corresponds to the Minor release version (major functional changes)
C corresponds to the Service-pack release version (service pack identifier with small bug fixes)
D <= 9 corresponds to the Special release version (exceptional case where one or two files are updated after the official product release)

Based on the above definition the three aspects of product release numbering conventions to be followed are,
Major / Marketing Release Version - A
Engineering Release Version - A.B.C
Build Number - ABC[0-9]

Examples

Eg-1) If the release version of a product is 4.0.0 then by default the Build Number shown in the About dialog will be Build 4000.
If the release version of a product is 4.0.0 and the release contains one special update then the Build Number in the About dialog will be Build 4001.

If the release version of a product is 4.1.0 then by default the Build Number in the About dialog will be Build 4100.
If the release version of a product is 4.1.2 then by default the Build Number in the About dialog will be Build 4120.

Eg-2) If the build number of a product is 5342 then:
5 - Major / Marketing release version.
3 - Minor version
4 - Service Pack Version
2 - Special updates. There should not be more than 9 special updates.

Eg-3) <Product Name>_5.0.0 - New general major release.

<Product Name>_5.1.0 - Minor release with some major functional changes in the product.

<Product Name>_5.0.1 - Service pack release with some bug fixes over the general release.
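In code, turning a release identifier into the displayed build number is just a matter of running the four digits together. A rough sketch in C (the values are taken from the examples above, not from any real product):

#include <stdio.h>

int main(void)
{
    /* Release identifier A.B.C.D, e.g. 4.1.2 with no special update. */
    int major = 4, minor = 1, servicepack = 2, special = 0;

    /* Build number convention ABC[0-9]: the four digits concatenated. */
    int build = major * 1000 + minor * 100 + servicepack * 10 + special;

    printf("Release %d.%d.%d -> Build %d\n", major, minor, servicepack, build);  /* Build 4120 */
    return 0;
}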

Is Your Computer Safe & Secure?


The programs we're using are getting more complex while software update cycles are getting shorter. Add the fact that we're increasingly depending on computers connected to the Internet to this mix and what you have is a recipe for computer bugs and security holes.

The good news is that these same factors can also be the best way to handle computer and the Internet related security issues and bugs. But only if you take the necessary steps to stay informed.

Things you can do to safeguard your computer

  • STAY ALERT!

Sounds simple? But do you really know if you have the latest patch for your browser, the software you use every day, or even the operating system that you run all your programs on?

Don't expect to hear about security issues and other software bugs in traditional media such as TV and newspapers. Even if you subscribe to a technical journal, you may not get the news in time.

    • Use email notification services : Subscribe to email notification services related to the software you use. Don't forget to include your operating system, web browser and any other software that will connect to the Internet in this list. Almost all of these notification services are free and subscription information is usually found on the software publisher's web site or the software registration card.

    • Periodically check related web sites : If an email notification service is not available, add a task to your calendar to check your software publishers' web sites at least every month, if not every week. You may have to search their news archives to find any security bulletins.

    • Search newsgroups : Some software publishers may not provide timely information about their software glitches openly. In such instances, newsgroups dedicated to open discussions may help you to find related messages posted by other users. Be aware that the quality and the credibility of information gathered from newsgroups may be lower than information retrieved using the above two methods. Searching, rather than browsing messages one by one, is recommended when it comes to newsgroup postings. For example, search for:
      "product name" AND bug OR fix

  • TAKE ACTION

Once you become aware of a bug or a security issue, carefully read the documentation for it and take the recommended action.

For example, if applying a software patch is recommended by the software publisher, do so as soon as possible. Don't delay taking action until the end of the month. Some software patches must be applied in a particular order. Applying fixes as they become available could make it easier to keep this order.

  • KEEP DEFECTIVE SOFTWARE OUT OF REACH

After applying patches to your current software installation, be sure to remove defective software from circulation and to document the actions you took for future reference.

For example, if you receive a replacement CD or a floppy with a fix, remove the obsolete disks from circulation to avoid future confusion.

If the fix was provided in the form of a patch (meaning you still need the original installation disks in case you have to reinstall the software), be sure to make a note of the patches you applied for future reference. You may want to keep a separate notepad for this purpose or simply label or mark the disks as a reminder to yourself.

If you're responsible for maintaining more than just your personal computer, administrating a network for example, you should take extra steps such as examining server log files, renewing passwords and evaluating the effectiveness of your organization's security measures.

Following is a list of resources useful to all Windows users and to most other Internet users to stay up-to-date with security and other computer software defects related news:

· Deja News

Online tool for searching, reading and posting Usenet newsgroups.

· Microsoft Security Advisor Program

Security, Microsoft Security Advisor, Internet Security, NT Security. News, advisories, how to improve security.

· Windows Update

Get Windows 98, NT 5 and other software updates online.

· Internet Explorer Security Area

The place to get Internet Explorer related security updates.

· Microsoft Security Notification Service

The Microsoft Security Notification Service is a free e-mail notification service that Microsoft uses to send information to subscribers about the security of Microsoft products.

· CERT* Coordination Center

The CERT* Coordination Center studies Internet security vulnerabilities, provides incident response services to sites that have been the victims of attack, publishes a variety of security alerts, researches security and survivability in wide-area-networked computing, and develops information to help you improve security at your site.

· NTBugtraq Home Page

NTBugtraq is a mailing list for the discussion of security exploits and security bugs in Windows NT and its related applications.

· World Wide Web Security FAQ

W3C's World Wide Web Security FAQ for webmasters.

· Netscape Security Solutions

Security issues related to Netscape products.

· Windows NT fixes FTP directory

· Computer Incident Advisory Capability

CIAC provides on-call technical assistance and information to Department of Energy (DOE) sites faced with computer security incidents. The other services CIAC provides are: awareness, training, and education; trend, threat, vulnerability data collection and analysis; and technology watch.

· Computer Security Technology Center, The

Located at the Lawrence Livermore National Laboratory, it provides solutions to U.S. Government agencies facing today's security challenges in information technology.