
Wednesday, August 15, 2007

How to Select a Search Engine Optimization Firm

There are a lot of very good search engine optimization firms to choose from, and there are a lot (too many) of firms that say they do search engine optimization but end up being a waste of time and money. I hope today’s article will help you separate the wheat from the chaff.

When you begin your vendor selection process, you should start by defining your needs and goals. So many people enter into the selection process with no idea of what their goals may be other than, “We want to rank on page one of Google.” Here are some other issues to consider.
What Are Your Goals?

Are you interested in branding, or more likely, are you interested in how search engine optimization can help you grow your business? To me, growing your business at a good ROI should be the ultimate goal. Just like you would measure any form of marketing, you should determine what the ROI should be from your search engine optimization efforts.

If you are not an e-commerce Web site, but you have a lead form on your site, you can determine a value to place on each lead that comes through your search engine optimization efforts and measure against that. If you get a majority of your leads/inquiries through someone calling, perhaps your goal is an increase in relevant traffic to the site. In that case, you can determine the value of each “click” from your search engine optimization efforts and then tally up the total clicks per month, measuring this against the cost of the program.
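The click-valuation math above can be sketched in a few lines. This is a rough illustration with made-up figures; the per-click value, traffic, and program cost are all hypothetical:

```python
# Hypothetical figures: value each SEO-driven click, tally monthly clicks,
# and compare against the monthly cost of the optimization program.
def seo_click_roi(value_per_click, clicks_per_month, monthly_cost):
    """Return (monthly value, ROI as a fraction of cost) for an SEO program."""
    value = value_per_click * clicks_per_month
    roi = (value - monthly_cost) / monthly_cost
    return value, roi

value, roi = seo_click_roi(value_per_click=0.75, clicks_per_month=4000, monthly_cost=1500)
print(f"Monthly value: ${value:.2f}, ROI: {roi:.0%}")  # Monthly value: $3000.00, ROI: 100%
```

The same function works for lead forms: swap in a value per lead and leads per month instead of clicks.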

Saying this, I should remind everyone that proper search engine optimization takes time, so it’s best to evaluate year-over-year increases. This valuation should occur after the recommendations from your provider have been implemented for a minimum of two to three months. It has been my experience that measurable increases from your search engine optimization efforts will occur no sooner than this timetable, but each Web site is unique.
Unique Challenges

No two Web sites are the same. Every search engine optimization project will have its unique set of challenges and needs. If there's anything that irritates me, it is search engine optimization firms that have "packages." Some Web sites are newly launched and will require a lot more work to get the ball rolling (link building, among other things). Some Web sites are high-quality but may lack visible text, so they may require copywriting assistance. And then there are some with so many technical challenges that a talented Web development team is needed to sort through the issues.

Another thing to consider here is your available human resources pool, or lack thereof. Do you have a Web developer who can take the recommendations provided by a search engine optimization company and can accurately implement these recommendations? Would your company allow an outside vendor access to your Web site to make changes that may be necessary to assist in the search engine optimization efforts?

The catch-22 of all of this is that a smaller company will probably require the most help. A smaller company would probably not have a copywriter on staff, a development team, a public relations department, or any other resources to assist in these efforts. That means you would depend on your search engine optimization provider to bring all of its resources to the project. The more resources needed, the more time the provider must invest; and the more time needed, the more money you can expect to spend.

Once you have managed to match your needs/goals with a list of providers you would like to contact, it’s now time to ask some very important questions.
Experience

How many search engine programs has your provider managed? I'm not saying that the sheer number of projects is the key in the selection process, but years of experience can be very beneficial. Knowing what works in the long term is key to achieving results that last and are not overly dependent on algorithm updates.

A good rule is to avoid firms that work with Web sites in the porn or gambling industries. Chances are, these firms are accustomed to risky behavior and could get you associated with some bad neighborhoods. It's like your parents used to say: "Who you hang out with says a lot about who you are."

If they guarantee top-ten rankings, run. Don’t walk. Run. There is no such thing as guaranteed top-ten rankings in organic search engine optimization. We (search engine optimization companies) do not own the search engines. We are similar to a public relations firm in that we know how to best position you with the search engines, but the search engines – ultimately – will rank you based upon their criteria. A good search engine optimization firm understands the criteria and can, over time, help you to enhance your presence in the major search engines.

If they tell you they are going to submit your Web site to hundreds or thousands of search engines, you might want to consider another provider. Submission to the major crawler-based search engines, other than possible XML feeds, is not necessary. Quality link building (internally and externally) will get your site well indexed.

Can they show you live examples of their work and the results? Any firm worth its salt will be more than happy to point you to live examples of rankings and testimonials from clients. They should have a deep pool of references for you to call and speak with.
Transparency

I think this is one aspect of the selection process that is often overlooked. Many search engine optimization companies are not willing to divulge what it is that they do. Again, there are many very good search engine optimization firms that will want you to be educated in the process. The more you understand search engine optimization, the easier it will be for the firm to work with you.

After all, you will know why certain recommendations are being made, and you will become a champion in the efforts to get recommendations implemented and in pushing the program forward. There is little more frustrating than working with a large company that cannot seem to get the IT team to sign off on the recommendations, or that puts them on the back burner for weeks. So, the more you understand, the more you will be able to help the search engine optimization company do its job, and the more successful you will all be.

Hope this helps! If there are any topics you would like me to cover in future articles, please don’t hesitate to contact me!

Source by searchenginewatch.com

Monday, August 13, 2007

Offshore Software Development India (OSDI) venturing abroad.

After last year's success with the shipping portal shipping-exchange.com and the blog site blogfreehere.com, the IT Director of Offshore Software Development India (OSDI) recently visited Scotland, England and Wales in search of new projects. OSDI offers a wide range of IT skills and services, focusing mainly on Business Process Outsourcing (BPO), Software Development, IT Consultancy, Web Designing / Web Development, Offshore Outsourcing, Multimedia, Customized Software Applications and Search Engine Optimization (SEO). Returning to Ahmedabad from the UK last weekend, the IT Director told the staff that there are new challenges to meet and much to deliver to the UK-based customers.

Technology is a wide arena, like the outer space of our galaxy. Enormous potential lies there, and companies like Offshore Software Development India (OSDI), together with clients like you, can explore it to its zenith. The days of the stand-alone PC are long gone; the World Wide Web has conquered every PC, server, mobile and laptop we use. There is plenty to explore, from Web pages to online news delivered by RSS feeds to podcasts, and the amazing YouTube.com has changed everything as far as online video is concerned. Imagine your website as one of the millions of websites sitting on the Net, waiting to be explored. The hits on your website count: they generate business and inquiries, and they can bring you the business and turnover you have been waiting for. Offshore Software Development India (OSDI) can help your website succeed in this highly competitive market. Search Engine Optimisation is the way ahead.

Our Search Engine Optimisation service is the perfect fusion of linguistic skills, technical know-how and market sector research, blessed with a keen eye for customer needs. A business is only successful if the ROI is good. Hence we always say: "Deliver with Difference to Succeed." We are confident handling major technological brands. We understand what your brand means to you. We can work successfully with complex sites and diverse needs. We try to go that extra mile just for you.

We provide value for your money. Our Search Engine Optimisation service includes keyword analysis, solutions for non-search-engine-compatible sites, competitor analysis, on-page optimisation, deep site optimisation, valuable link building, a brand-protective approach, solutions for catalogue sites, fast manual search engine submission, optimisation for a re-brand, position reporting and page ranking improvement. We compete in the international market of the Internet.

We are humble professionals, so please forgive us if we are always on about your business needs. Our core expertise also lies in a few more areas of the IT sector: Web Development, Web Designing, Outsourced Offshore Software Application Development, .NET Development on Microsoft platforms, Shopping Cart / E-store Development, Customer Relationship Management Portal Development, E-commerce Application Development, Auction Websites and Portals, Commerce Server based solutions, and Web Content Management Systems (CMS). We believe open source is an equally competitive solution, and our open-source services can meet your needs for Web applications / application re-designing, PHP development (exploring open source to enhance your business while lowering your maintenance cost), and Joomla and Drupal based solutions built on a Content Management Framework (CMF).

http://www.offshoresoftwaredevelopmentindia.com

We just like to share our activities with the world, so if you feel like trying us out, why not contact us at info@offshoresoftwaredevelopmentindia.com or call us on +91-79-65457841? We would love to talk about your business needs, and it's free.

Changes in PHP 5.2.3

  • Security Fixes
    • Fixed an integer overflow inside chunk_split() (by Gerhard Wagner, CVE-2007-2872)
    • Fixed possible infinite loop in imagecreatefrompng. (by Xavier Roche, CVE-2007-2756)
    • Fixed ext/filter Email Validation Vulnerability (MOPB-45 by Stefan Esser, CVE-2007-1900)
    • Fixed bug #41492 (open_basedir/safe_mode bypass inside realpath()) (by bugs dot php dot net at chsc dot dk)
    • Improved fix for CVE-2007-1887 to work with non-bundled sqlite2 lib.
    • Added mysql_set_charset() to allow runtime altering of connection encoding.
  • Changed CGI install target to php-cgi and 'make install' to install CLI when CGI is selected. (Jani)
  • Changed JSON maximum nesting depth from 20 to 128. (Rasmus)
  • Improved compilation of heredocs and interpolated strings. (Matt, Dmitry)
  • Optimized out a couple of per-request syscalls. (Rasmus)
  • Optimized digest generation in md5() and sha1() functions. (Ilia)
  • Upgraded bundled SQLite 3 to version 3.3.17. (Ilia)
  • Added "max_input_nesting_level" php.ini option to limit nesting level of input variables. Fix for MOPB-03-2007. (Stas)
  • Added a 4th parameter flag to htmlspecialchars() and htmlentities() that makes the function not encode existing html entities. (Ilia)
  • Added PDO::FETCH_KEY_PAIR mode that will fetch a 2 column result set into an associative array. (Ilia)
  • Added CURLOPT_TIMEOUT_MS and CURLOPT_CONNECTTIMEOUT_MS cURL constants. (Sara)
  • Added --ini switch to CLI that prints out configuration file names. (Marcus)
  • Implemented FR #41416 (getColumnMeta() should also return table name). (Tony)
  • Fixed filetype() and linkinfo() processing of symlinks on ZTS systems. (Oliver Block, Tony, Dmitry)
  • Fixed SOAP extension's handler() to work even when "always_populate_raw_post_data" is off. (Ilia)
  • Fixed altering $this via argument named "this". (Dmitry)
  • Fixed PHP CLI usage of php.ini from the binary location. (Hannes)
  • Fixed segfault in strripos(). (Tony, Joxean Koret)
  • Fixed gd build when used with freetype 1.x (Pierre, Tony)
  • Fixed bug #41525 (ReflectionParameter::getPosition() not available). (Marcus)
  • Fixed bug #41511 (Compile failure under IRIX 6.5.30 building md5.c). (Jani)
  • Fixed bug #41504 (json_decode() incorrectly decodes JSON arrays with empty string keys). (Ilia)
  • Fixed bug #41477 (no arginfo about SoapClient::__soapCall()). (Ilia)
  • Fixed bug #41455 (ext/dba/config.m4 pollutes global $LIBS and $LDFLAGS). (mmarek at suse dot cz, Tony)
  • Fixed bug #41442 (imagegd2() under output control). (Tony)
  • Fixed bug #41430 (Fatal error with negative values of maxlen parameter of file_get_contents()). (Tony)
  • Fixed bug #41423 (PHP assumes wrongly that certain ciphers are enabled in OpenSSL). (Pierre)
  • Fixed bug #41421 (Uncaught exception from a stream wrapper segfaults). (Tony, Dmitry)
  • Fixed bug #41403 (json_decode cannot decode floats if localeconv decimal_point is not '.'). (Tony)
  • Fixed bug #41401 (wrong unary operator precedence). (Stas)
  • Fixed bug #41394 (dbase_create creates file with corrupted header). (Tony)
  • Fixed bug #41390 (Clarify error message with invalid protocol scheme). (Scott)
  • Fixed bug #41378 (fastcgi protocol lacks support for Reason-Phrase in "Status:" header). (anight at eyelinkmedia dot com, Dmitry)
  • Fixed bug #41374 (whole text concats values of wrong nodes). (Rob)
  • Fixed bug #41358 (configure cannot determine SSL lib with libcurl >= 7.16.2). (Mike)
  • Fixed bug #41353 (crash in openssl_pkcs12_read() on invalid input). (Ilia)
  • Fixed bug #41351 (Invalid opcode with foreach ($a[] as $b)). (Dmitry, Tony)
  • Fixed bug #41347 (checkdnsrr() segfaults on empty hostname). (Scott)
  • Fixed bug #41337 (WSDL parsing doesn't ignore non soap bindings). (Dmitry)
  • Fixed bug #41326 (Writing empty tags with Xmlwriter::WriteElement[ns]) (Pierre)
  • Fixed bug #41321 (downgrade read errors in getimagesize() to E_NOTICE). (Ilia)
  • Fixed bug #41304 (compress.zlib temp files left). (Dmitry)
  • Fixed bug #41293 (Fixed creation of HTTP_RAW_POST_DATA when there is no default post handler). (Ilia)
  • Fixed bug #41291 (FastCGI does not set SO_REUSEADDR). (fmajid at kefta dot com, Dmitry)
  • Fixed bug #41287 (Namespace functions don't allow xmlns definition to be optional). (Rob)
  • Fixed bug #41283 (Bug with deserializing array key that are doubles or floats in wddx). (Ilia)
  • Fixed bug #41257 (lookupNamespaceURI does not work as expected). (Rob)
  • Fixed bug #41236 (Regression in timeout handling of non-blocking SSL connections during reads and writes). (Ilia)
  • Fixed bug #41134 (zend_ts_hash_clean not thread-safe). (marco dot cova at gmail dot com, Tony)
  • Fixed bug #41097 (ext/soap returning associative array as indexed without using WSDL). (Dmitry)
  • Fixed bug #41004 (minOccurs="0" and null class member variable). (Dmitry)
  • Fixed bug #39542 (Behavior of require/include different to <>
Source by pt.php.net

Saturday, August 11, 2007

Link Page Has A PR Zero - Five Reasons

Article Author - Krunal - Creeper SEO Member

A lot of the sites that request a link exchange from me have a PR 5 or 6 on their home page, but when I click through to the links page, it has a PR 0. We can't exchange links with a site like that.

There are many things to consider when exchanging links. The above question asks whether you should consider linking to a PR 5 site when the links page is a PR 0.

Five reasons why a links page has no PR:
1. The links page is new and has not had a PR assigned to it in the Google toolbar (that does not mean it has no PR, just that the toolbar has not been updated to reflect its PR).

This case is easy to spot. Look at the URL of the links page. Then go to the homepage and view its source in a text editor (in Internet Explorer, select Source from the View menu).

Do a search of the source for the links page filename. For example, if the links page is called links.html, search the source code for links.html.

If you find a link on the homepage to the links page, chances are the links page is new and has not had time to be assigned a PR in the toolbar yet.

Check that link to the links page in the source again. Make sure that there is no dynamic linking going on. While it is not always easy to spot, the introduction of the "nofollow" attribute in recent months has meant that many non-techie webmasters can create dynamic links quickly, easily, and without much technical knowledge. If you see the word "nofollow" in the link HTML pointing to the links page, then this webmaster is not passing PR to the links page. In fact, worse than that, the search engines won't even find and index the links page.
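This check can be automated with a short script that scans the homepage source for a link to the links page and flags any "nofollow" on it. A sketch using only the standard library; the filename and HTML snippet are placeholders standing in for a fetched page:

```python
from html.parser import HTMLParser

class LinkCheck(HTMLParser):
    """Find anchors pointing at a given page and note any rel="nofollow"."""
    def __init__(self, target):
        super().__init__()
        self.target = target
        self.found = False
        self.nofollow = False

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        if self.target in (attrs.get("href") or ""):
            self.found = True
            if "nofollow" in (attrs.get("rel") or ""):
                self.nofollow = True

# In practice you would download the homepage with urllib.request.urlopen;
# here a hard-coded snippet stands in for the fetched source.
homepage_html = '<a href="/links.html" rel="nofollow">Links</a>'
checker = LinkCheck("links.html")
checker.feed(homepage_html)
print(checker.found, checker.nofollow)  # True True -> linked, but PR is blocked
```

If `found` is False the links page may not be linked at all; if `nofollow` is True, no PR flows to it.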

2. The links page is not being linked to, or is linked to using a dynamic link.

If you do not find a link to the links page on the homepage of the site, or the link uses one of the forms of dynamic linking, then I would not recommend you link to that site. The links page will get no PR, and won't even be found by the search engines, so you get no benefit. It is possible the links page does have a link pointing to it from another page, but let's look at that as a separate issue.

3. The links page is buried deep in the navigation of the website. Some webmasters bury the link to their links page deep within their site, so that the only way a search engine spider will find it is by following three or four links from the homepage. When this is done, very little (if any) PR flows to the links page. Again, I would not link to a site like this. You won't get much benefit.

4. Multiple links pages bury the page your link is found on.

On some websites, there are so many reciprocal partners that links are often split across tens (or even hundreds) of pages. For a search engine spider to find the page you are on, it would have to follow link after link through these links pages until it reaches yours. Again, by the time it gets there, very little (if any) PR will have flowed to the page your link is on.

For points #3 & #4, my advice is simple. Start at the homepage, and see how many clicks it takes you to navigate to the page your link is on. If it is more than 2 clicks away, think carefully about exchanging links. You may not get much out of the deal.
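The "how many clicks away" test for points #3 and #4 is a shortest-path search over the site's link graph. A sketch with a hard-coded graph standing in for a crawled site (all page names are made up):

```python
from collections import deque

def click_depth(links, start, target):
    """Breadth-first search: minimum number of clicks from start to target."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        page, depth = queue.popleft()
        if page == target:
            return depth
        for nxt in links.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))
    return None  # not reachable -- spiders won't find it either

site = {
    "index.html": ["about.html", "resources.html"],
    "resources.html": ["links1.html"],
    "links1.html": ["links2.html"],
    "links2.html": ["your-link-page.html"],
}
print(click_depth(site, "index.html", "your-link-page.html"))  # 4 -> think twice
```

A result over 2 means the page is buried; `None` means it is orphaned entirely.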

5. A sneaky one here. Check for a robots.txt file on the site that is requesting the link exchange. If there is one, make sure there is no command that disallows the spiders from accessing the links page. This technique prevents the search engine spiders from visiting the links page, so no PR, and no benefit, is passed to your site. This is a definite one to avoid.
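Python's standard library can perform this robots.txt check directly. The domain and paths below are placeholders; normally you would call `set_url()` and `read()` against the real site instead of parsing a sample file inline:

```python
from urllib.robotparser import RobotFileParser

# Parse a sample robots.txt inline to show the check itself.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /links.html",
])

# If the links page is disallowed, spiders never see it and no PR flows to you.
print(rp.can_fetch("*", "http://example.com/links.html"))  # False
print(rp.can_fetch("*", "http://example.com/index.html"))  # True
```

A `False` for the links page means the exchange is worthless to you.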

By Creeper Seo - SEO News Provider

Thursday, August 9, 2007

PHP Ajax Frameworks

  1. AJASON : AJASON is a PHP 5 library and JavaScript client
  2. AjaxAC : AjaxAC is an open-source framework written in PHP
  3. Ajax Agent : powerful open source framework for rapidly building Ajax or Rich Internet Applications (RIA)
  4. Cajax : A PHP class library for writing powerful reloadless web user interfaces using Ajax (DHTML + server-side) style
  5. CakePHP : Cake is a rapid development framework for PHP which uses commonly known design patterns like ActiveRecord, Association Data Mapping, Front Controller and MVC.
  6. Claw : a convenient and intuitive way of development of PHP5 driven object oriented applications.
  7. DutchPIPE : PHP object-oriented framework to turn sites into real-time, multi-user virtual environments:
  8. Flexible Ajax : Flexible Ajax is a handler to combine the remote scripting technology, also known as AJAX (Asynchronous Javascript and XML), with a php-based backend.
  9. Guava : Groundwork Guava is a PHP-based application framework and environment.
  10. HTML_AJAX : HTML_AJAX is a PEAR package for performing AJAX operations from PHP.
  11. HTSWaf : The HTS Web Application Framework is a PHP and Javascript based framework designed to make simple web applications easy to design and implement.
  12. My-BIC : My-BIC AJAX State of Mind for PHP harmony
  13. PAJAJ : PHP Asynchronous Javascript and JSON
  14. PAJAX : Remote (a)synchronous PHP objects in JavaScript
  15. phpAjaxTags : phpAjaxTags is a port to PHP from java tag library AjaxTags.
  16. PHPWebBuilder : PHPWebBuilder is a PHP framework designed following well-known object oriented designs and principles featuring a highly reusable components architecture, metadata based persistence and traditional GUI style programming support.
  17. Qcodo : open-source PHP 5 framework
  18. Simple AJAX : This tutorial demonstrates how to perform AJAX functionality simply and effectively, using the AJAX JSMX library, coupled with the JSON-PHP library.
  19. symfony : open-source PHP5 web framework
  20. TinyAjax : TinyAjax is a small php5 library that allows you to easily add AJAX-functionality to existing pages
  21. xajax : Ajax-enable your PHP application with a simple toolkit that gets the job done fast.
  22. XOAD : PHP based AJAX/XAP object oriented framework that allows you to create richer web applications
  23. Zoop : Zoop is an object oriented framework for PHP based on a front controller. It is designed to be very fast and efficient and very nice for the programmer to work with.
  24. Zephyr : zephyr is an ajax based framework for php5 developers.


Source

Wednesday, August 8, 2007

Brand Expert Shares Easy Ways to Raise Your Search Engine Rankings and Get More Traffic to Your Site

SEO experts have protected their secrets for years and convinced us that SEO is mysterious and complex. Not so. Brand expert Erin Ferree of elf design has blown the lid off all the secrecy and revealed that Search Engine Optimization is actually easy. In fact her new book, "Raise Your Ranking," lays out a complete system for small businesses to use to raise their search engine rankings, get more traffic to their websites, save thousands of dollars and have control of their own future.

Belmont, CA 7, 2007 -- Driving quality traffic to a website is one of the most important marketing tasks for small businesses. It is an integral element of brand-building, internet marketing, sales, and defining a business as distinct from the competition. It just doesn't matter what you put on your website if no one visits your site and sees the great products and information you offer. Getting people's attention and enticing them to your website can be accomplished in two ways. One of these methods can become very expensive very quickly. The other method seems to require insider knowledge of super-secret tactics. Hiring someone to work their magic to drive traffic to your site can also be very expensive. The trade winds, however, are shifting.

What, you ask, is this new wind blowing across the nation? It is the unprecedented revelation of the insider small business SEO secrets of a top branding and SEO expert. Erin Ferree, Principal of Elf Design, Inc. of Belmont, California, has broken the code of silence and revealed her very profitable and effective system for Small Business Search Engine Optimization (SEO). "I decided to lift the veil of secrecy," says Ferree. "As a small business owner, I understand the struggles of other small businesses," she says.

Ferree says, "Two primary methods for getting targeted Internet traffic to your website are to use paid Internet advertising, including pay-per-click advertising, on the search engines, and to earn high rankings in the organic search results." Pay-per-click advertising can become expensive, as we all know. Many people also recognize paid ad placements in the search engine listings. They choose the natural (organic) listings and trust the search engines to give highest ranking to the most popular, and presumably the best, sites. "The way to get the visibility and the traffic you want," she says, "is to earn high rankings in the organic search results listings. You capture the attention of a greater number of people and you don't blow your entire marketing budget in one place."

"Small businesses must budget carefully and spend valuable marketing dollars even more carefully," says Ferree. "Fortunately, organic search results are more effective in driving high-quality traffic to your website -- people who are genuinely interested in your products and services." Ferree's Small Business SEO system is geared to help small businesses rank higher in the organic results sections of the search engines' lists. She has developed a winning system of SEO for small businesses. "I've actually just taken everything I know about SEO and funneled it into this new product, Raise Your Ranking. I've made it as easy as possible. In fact, if you can write a sentence, you can do this. With Raise Your Ranking, any small business can successfully position their websites in the search engines for phenomenal success in the rankings, in turn driving more and more traffic to their website." Equally important, with this product, small businesses have the knowledge and the tactics to re-optimize their sites again and again as their business grows, as the market changes, as the words people search shift around.

The release of this product, specifically intended to help small businesses do their own SEO, is unprecedented. Raise Your Ranking might be the marketing product of the year. To learn more about the product and about Elf design, visit www.howtoraiseyourranking.com.

About Erin Ferree and Elf Design, Inc.
Elf design, founded by Erin Ferree, is a brand identity and graphic design firm that has been helping small businesses grow with bold, clean and effective logo designs for over a decade. Elf design offers the comprehensive graphic and web design services of a large agency, with the one-on-one, personalized attention of an independent design specialist. Elf design works closely with their clients to create designs that are visible, credible and memorable -- and uniquely theirs. For more information about elf design, please visit: http://www.elf-design.com

Tuesday, August 7, 2007

Hi Friend,

Guruji.com has launched a search and win contest,
where you can win a trip to Singapore, a Bajaj Pulsar,
Video iPods and more.

I played the contest and I liked it.
You will like it too. Click on the link below to play.

http://contest.guruji.com/?refid=c99c52124eadbd358980f82bae5732ca




Cheers,
krunal

10 Ways to Make Sure Your SEO Goes Out of Its Way for You

Author: Michael Murray

If they want to have success, companies should do everything they can to ensure that their SEO firm doesn't provide lousy service. Here are 10 tips to keep in mind:

1. Be realistic.

Don't waste your time or the SEO firm's expertise by arguing about broad search terms. Don't say you want to be in the Top 10 for "e-commerce." The SEO firm should ask: "E-commerce and what else? E-commerce consultants? Please be specific."

2. Think long-term.

If you can't help yourself and you want broad search terms, such as "toys," think through what it may take to pull that off. Variations on your favorite term may be best in the short term. If you start looking a year or two out, then make sure there aren't site design, programming and link popularity flaws.

3. Be open with log files.

Don't shield log files from the SEO firm. Admit if your web analytics capability is poor. How can the SEO firm do a good job if your host company can't provide decent statistics, such as the number of visitors from search engines and the actual search terms they use?

4. Change text.

If an SEO firm wants to change text, give the consultant lots of room. If a graphic can be modified so the words appear as text, be open-minded about the change. Chances are, it won't hurt the overall look of the web site. SEO professionals grit their teeth when clients say they want rankings and then resist change.

5. Don't sit on recommendations.

You may end up discouraging the SEO firm you're paying if you hire them and then fail to review their suggestions.

6. Reply to e-mails, voicemails and other communications.

If an SEO firm contacts you, especially for a scheduled meeting, make a point to return the e-mail or call. Really, it's a good idea to be available for strategic conference calls.

7. Stick to the program.

Don't ask the SEO firm to optimize the web site and then expect them to provide Pay-Per-Click (PPC) guidance as well. If you can't handle PPC on your own, pay the experts.

8. Keep statistics in perspective.

With many search terms and engines, it's always going to be possible for some keywords not to rank. Don't get hung up on what search terms didn't pop in the Top 30. Focus on your traffic growth and conversions.
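Keeping that focus is just two numbers tracked over time. A sketch with illustrative (invented) monthly figures:

```python
def growth(previous, current):
    """Period-over-period growth as a fraction."""
    return (current - previous) / previous

def conversion_rate(conversions, visits):
    """Share of visits that turn into leads or sales."""
    return conversions / visits

# Hypothetical months: rankings on a few terms may wobble, but these two
# numbers tell you whether the program is actually working.
print(f"Traffic growth: {growth(8000, 10000):.0%}")           # 25%
print(f"Conversion rate: {conversion_rate(150, 10000):.1%}")  # 1.5%
```

If both trend upward year over year, missing the Top 30 on a few keywords doesn't matter.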

9. Know your limits.

SEO firms appreciate informed clients - to a limit. Read the articles. Pick up an SEO book. Keep up with the news. But don't hire an SEO expert and then tell them you're an SEO expert. For example, you may be excited to learn about all of the SEO devices that could be at your disposal. Don't blame the SEO firm for failing to use them all at once. Measured, gradual changes are best.

10. Take your company name out of title tags.

Do yourself a favor and make title tags available for search terms, not your long company name. Only keep it if it's short and useful from a title tag proximity and density standpoint.

Source by seoarticlesweb.com

YAML - Just another powerful and easy markup language

YAML is a human-readable data serialization format that takes concepts from languages such as XML, C, Python, Perl, as well as the format for electronic mail as specified by RFC 2822. YAML was first proposed by Clark Evans in 2001, who designed it together with Ingy döt Net and Oren Ben-Kiki.

YAML is a recursive acronym meaning "YAML Ain't Markup Language". Early in its development, YAML was said to mean "Yet Another Markup Language", and it was retronymed to distinguish its purpose as data-centric rather than document markup. However, since "markup language" is frequently used as a synonym for data serialization, it is reasonable to consider YAML a lightweight markup language.

YAML syntax is relatively straightforward and was designed to be easily mapped to data types common to most high-level languages: lists, hashes (mappings), and scalar (single-value) data. Its familiar indented outline and lean appearance make it especially suited for tasks where humans are likely to view or edit data structures, such as configuration files, dumping during debugging, and document headers (e.g. the headers found on most e-mails are very close to YAML in look). Its line and whitespace delimiters make it friendly to ad hoc grep/Python/Perl/Ruby operations. YAML uses a notation based on a set of sigil characters distinct from those used in XML, making the two languages composable. A major part of its accessibility comes from eschewing enclosures like quotation marks, brackets, braces, and open/close tags, which can be hard for the human eye to balance in nested hierarchies.

Data structure hierarchy is maintained by outline indentation. The following YAML document defines a hash with 7 top-level keys. One of the keys, "items", contains a 2-element array (or "list"), each element of which is itself a hash with four keys. The "ship-to" hash content is copied from the "bill-to" hash's content as indicated by the anchor (&) and reference (*) labels. An optional "..." can be used at the end of a file (useful for signalling an end in streamed communications without closing the pipe). Optional blank lines can be added for readability. The specific number of spaces in the indentation is unimportant as long as the hierarchy order is maintained and parallel elements have the same left justification. Multiple documents can exist in a file, separated by "---". Notice that strings do not require enclosure in quotation marks.

Example:


--- !myDocument
logEvent: Purchase Invoice
date: 2007-08-06
customer:
  given: Dorothy
  family: Gale

bill-to: &id001
  street: |
    123 Tornado Alley
    Suite 16
  city: East Westville
  state: KS

ship-to: *id001

items:
  - part_no: A4786
    descrip: Water Bucket (Filled)
    price: 1.47
    quantity: 4

  - part_no: E1628
    descrip: High Heeled "Ruby" Slippers
    price: 100.27
    quantity: 1

specialDelivery: >
  Follow the Yellow Brick
  Road to the Emerald City.
  Pay no attention to the
  man behind the curtain.
...
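The anchor/reference pair in the example can be pictured in Python terms. This is a sketch of what a YAML loader conceptually builds for "bill-to" and "ship-to" (the variable names are illustrative, not part of any YAML API):

```python
# A sketch of what a YAML loader conceptually builds for the bill-to/ship-to
# pair in the example above: the alias (*id001) refers to the very same
# object as the anchor (&id001), not a copy.
bill_to = {
    "street": "123 Tornado Alley\nSuite 16\n",  # the literal block (|) keeps newlines
    "city": "East Westville",
    "state": "KS",
}
ship_to = bill_to  # *id001: a second name for the same dict

# Because both names share one object, a change through one is visible
# through the other.
ship_to["city"] = "Emerald City"
print(bill_to["city"])  # → Emerald City
```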


Source

Monday, August 6, 2007

10 Steps to Success on the ’Net Without SEO

Search Engine Optimization (SEO) as we know and detest it is obsolete in this day and age. When Philipp asked me a while ago to write an article on “How do I optimize websites”, I couldn’t do it: the way SEO works in Germany on Google.de cannot really work for international websites in English. Moreover, nowadays you do not need conventional SEO tactics to succeed on the Net or in Google. To make a site succeed in these times, you first have to forget everything you know about on-page optimization and link building.

Now we can start our ten step guide to Google and traffic heaven.

1. Discover your niche
Be different: choose a topic or product that not everybody else already covers or sells. Discovering a niche is not to be confused with “keyword research” as in conventional SEO. You try to introduce a new niche rather than just obeying Google users’ existing demands. Try a different angle. Even a very crowded place like SEO itself still has new niches. I am indeed the first blogger to tackle mainly the SEO 2.0 topic.
2. Use WordPress
Instead of “on page optimization” you can install WordPress, which is search engine friendly out of the box. WordPress is not only blog software; you can use it as a small-scale CMS, and it will suffice for most average websites. Also, “search engine submission”, even with XML sitemaps, is no longer needed with WordPress. It pings Google Blog Search automatically, so your blog posts end up in the Google index just a few hours later.
3. Create a killer CSS design and submit it to CSS galleries
Traditional SEO is all about link building, or getting links. Sites doing SEO often look crappy. These days people link to web sites that look great just for the sake of the design. Unlike some years ago, CSS and web standards are nowadays the best way to design a site, and it is not difficult to create a great design, especially with WordPress. Create a killer design for your blog and you will be linked everywhere. Just check out this list at CSS Juice.
4. Allow trackbacks, use dofollow
Blogs thrive in connection with each other. The best way of connecting blogs is the trackback function. Install the dofollow plugin in order not to treat other bloggers like spammers.
5. Socialize, write comments and link other blogs
Link to and mention other blogs and bloggers in your posts. Commenting on other blogs is also much appreciated, as blogging is not a monologue if it’s done right.
6. Include social media on your site, use social media yourself
Include buttons for your favorite social bookmarking services like del.icio.us or StumbleUpon. Be careful with social news sites like Digg or Reddit: they may crash your server and/or cost you lots of money by driving tons of useless traffic to your site in short periods of time.
7. Write your own content, say something new, express yourself
Write about stuff in your niche that you know about. Write your own content; do not just post links to other sites. Say something new that wasn’t already said by everybody else. Express yourself; do not repeat yesterday’s news.
8. Compile what you know or what others said and publish it
If everything was said and done already in a particular case, compile it and create a list. Short top lists are the best solution: a “200 WordPress plugins” list is just too big, while “10 indispensable WordPress plugins” is far better.
9. Contribute to your favorite online publications
Do not just publish on your own blog or site. Contribute to other publications that cover your topic. Try your favorite ones first, as you probably know exactly what kind of topic they would like. Most publishers will link to your site.
10. Add new content at least every second day
Add new content often enough to create a stable readership. People who visit your site once a month might forget about it. It’s not always necessary to post every day, but if you write a real blog, do it at least three times a week.

As you see, most of it is not very spectacular, and you probably already do some of it. Moreover, no SEO as you know it has really been involved so far. If you are still not satisfied, you probably need some advanced SEO, or SEO 2.0 as I call it.

Source by blogoscoped.com

Saturday, August 4, 2007

10 things a PHP IDE has to have

There are so many PHP IDEs out today that it is very hard to choose between them. In my investigations I have found that, though there are many, they all fall short when it comes to the basic needs of a PHP developer.
Before I continue and review some of those PHP IDEs, here is a list of 10 things a PHP coding program has to have.

  1. One-click project creation by simply choosing a directory. Too many PHP IDEs have multi-step project creation. Some even have strange functions where you have to add files to the project and delete them. Adding and deleting a file from a project should be as easy as going to the file system and moving or removing the file. I don't want to delete a file from a project and find it hanging out in the project folder later. Some might think this is cool, but it is not.
  2. Local filesystem viewer that shows the filesystem tree without having to enter a drive letter. If I cannot see the file system from the IDE it's uninstall and delete followed by some violent thoughts directed at the software maker.
  3. A PHP debugger that works out of the box with a local webserver. No PHP IDE has this yet. I consider it the holy grail of the PHP software world. No matter how much support software manufacturers offer, it never covers this aspect of using an IDE well enough. Frequently the only reason for purchasing or using an IDE rather than a text editor is to get debugging features.
  4. An HTML toolbar. Why do PHP IDE makers think PHP developers want to type out, and can remember, all of HTML? After all, they are buying or downloading the IDE to ease the task of having to type things character by character. CSS is also much more important nowadays, as is JavaScript; both should be included.
  5. Price is in second place after debugging. When you think about it, you might see that the top commercial IDE makers are probably guilty of price fixing. Why they think that PHP developers will pay $300 for their software is beyond me. I myself would not pay that kind of money for a Java program that is buggy and runs slow as molasses. You want three hundred bucks? Give me everything on this list in a blindingly fast program written in C, Delphi, or Visual Basic.
  6. Drag-and-drop text that does not bug out when used. All PHP IDEs seem to have this problem in common: using drag and drop, or marking long rows of text, causes jumping, jitter, and the disappearance of the pointer. Some even scroll to a "home" area on the screen when too much text is marked.
  7. Fast start times. Okay, let's skip the slow Java debate and go straight to the core. I want my 2.5gz processor to start the IDE in the same time that it can start Word or Open Office. Waiting a minute is ridiculous. Again here commercial vendors may want to take note. If the program costs more than $300, I deduct $10 from the retail price for each second that it takes to start the program.
  8. File backups on save and timed backups of working files. I cannot stress how important this is. Without backups the program becomes a danger to use. I always find myself making several copies of files as I work to give me a stepping-back or history capability. It would be nice if an IDE had a savable history or versioning capability, but plain backup is a must.
  9. A TO DO list function. It should be simple with a title and text body. The list should appear per project. I get tired of seeing TO DO lists functions that require that I do more than just jot down the thought in my head.
  10. Intellisense. This is a must. But one also has to wonder why regular HTML is never included in Intellisense. I feel Intellisense is being used as an excuse for not including the other things needed to produce a proper PHP application.
Source by phpopensource.blogspot.com

Friday, August 3, 2007

Submitting your website to DMOZ

Today more than ever, in the field of search engine optimization (SEO), there is a very important step that needs to be taken in order to help a website's visibility in the major search engines. That important step is to submit it to DMOZ, also called the Open Directory Project or ODP.

DMOZ provides search results for a good percentage of the most important search engines and directories, including Google. First, DMOZ is NOT a robot-driven crawler but rather a large, human-edited directory of the Web. For any submission to be successful, a few important points need to be considered ahead of time:

Step A)
Your full contact information needs to be there. Make certain that your full contact information is easily accessible, preferably via a clearly identified contact button. An e-mail address alone is certainly not enough. Many ODP editors will tell you that if they don't see a real physical or postal address or a telephone number, then that website is usually tossed away from its particular category and will probably never make it into the directory.

Most importantly, if you wish to sell anything, you need to build credibility and honesty with your clients. In that case, giving proper and full contact information on the site is imperative.

Step B)
Do not attempt to SPAM the directory. You should only submit your site once and forget it for at least two to three months. According to DMOZ rules and regulations, you are only allowed to submit to one category. However, in certain isolated cases and if your website happens to be a very large one and offers lots of information, you may be able to submit a second section of it to a different category. As a rule of thumb, it usually takes time for most submissions to be processed.

This is especially true of categories with many daily submissions. It is not recommended to submit a website more than once, as it could end up at the bottom of the large list of sites waiting to be reviewed and approved, since they are processed according to their submission dates.

Step C)
Your website needs original and good content. During the course of your work, if you are only trying to publish an assortment of affiliate links or if your site happens to be a "mirror-site" of other websites that are plentiful on the Internet, then you are increasing your chances of your submission being rejected.

If in fact you really have to deal with affiliate products or services, we recommend that you add lots of new content, perhaps a product review category, an industry news section or any other additional information that will tell the DMOZ editors that your site has something original to offer and has lots of great content that will be of good use to their users.

Step D)
Double-check your website for spelling errors and typos. As much as the DMOZ editors are looking for great content, they are all only human and will probably be irritated by typos or spelling mistakes. Our experience with the ODP tells us that professionally written and carefully built websites with great content almost always make it into the directory eventually.

Step E)
Keep good records of your submission to DMOZ. We strongly recommend keeping a complete record of the date a website was submitted to the Open Directory Project and of the particular category it was submitted to. If the category you want to submit to has an editor, you should always make a note of who that editor is. Such information will be useful if you later need to inquire about the status of your submission.

Some of you might ask: "How long does it take to get listed?" Recently, we had one site listed within three weeks of submission and, on other less fortunate occasions, we waited over six months for other sites. It is extremely hard to predict anything.

Step F)
Select the proper category for any submission. With robotic search engines such as Google or AltaVista, there really is not much to think about when submitting a URL, since their crawlers or "spiders" will visit and index your site automatically, normally over a rather short period of time. However, when submitting to a directory such as DMOZ, a critical part of the submission process is choosing the right category. One good recommendation is to go online and look where other websites similar to yours have been placed in the directory.

When you get to the category that you think is best, press the "add URL" button. In some categories, the DMOZ editors might post a note mentioning certain restrictions. Read these notes carefully, and don't submit to a restricted category if your site doesn't meet the parameters mentioned.

Step G)
Always contact DMOZ through the proper channels. Finally, a word of caution: if the category where you want to submit does have an editor, this will usually be noted at the bottom of the page, and you should normally be able to send that editor a message. Another way to contact the DMOZ editors is through their online forum.

Once there, you can ask about the status of your submission, but you must always give them the category and submission date of your last attempt. Additionally, you can always ask a few questions about general DMOZ procedures and rules.

Try your best to meet their rules and regulations and normally your site should eventually be included in their directory.

Source by rankforsales.com

Google's Matt Cutts: do not stuff your keywords

Keyword stuffing is one of the oldest spamming techniques on the Internet. Many webmasters still use that technique although most search engines can detect it nowadays.

Last week, Google's anti-spam engineer Matt Cutts made fun of a website that used keyword stuffing. Apart from the rather dubious content of the web page, the webmaster included a very long list of related and unrelated keywords in a small text box at the end of the page.

Google doesn't like keyword stuffing

"Keyword stuffing is considered to be an unethical search engine optimization (SEO) technique.

Keyword stuffing occurs when a web page is loaded with keywords in the meta tags or in content. The repetition of words in meta tags may explain why many search engines no longer use these tags." (Wikipedia definition)

Google doesn't like keyword stuffing at all. If Google detects keyword stuffing on a web page, that page will be banned from Google's index. Google's Matt Cutts puts it that way:

"Webmasters are free to do what they want on their own sites, but Google reserves the right to do what we think is best to maintain the relevance of our search results, and that includes taking action on keyword stuffing."

You might use keyword stuffing on your web pages without knowing it

While most keyword stuffing is done intentionally, it can also happen that your web pages trigger Google's spam filters even though you didn't intend to spam.

For example, if you have very similar keywords that are used often on your web pages, this might look like keyword stuffing.

How to avoid unintentional keyword stuffing

If you're unsure whether you use a certain keyword too often on a web page, use IBP's optimizer tool. The optimizer tool will analyze your web pages and compare them to the web pages that currently have a top 10 ranking for that keyword.

IBP will tell you in plain English sentences how often you should use your keywords on your web page so that your site can get top 10 rankings. IBP will also tell you in which web page elements you should put the keywords (and how often) so that you get the best results.

Spamming search engines is not a good idea. Although most spam techniques will work for some time, all of them will get your website banned sooner or later. Better focus on ethical search engine optimization methods to get lasting results.

Source by free-seo-news.com

Thursday, August 2, 2007

Duplicate Content is one of the most perplexing problems in SEO.

15 things about how Google handles duplicate content.

1. Google’s standard response is to filter out duplicate pages, and only show one page with a given set of content in its search results.

2. I have seen evidence in the SERPs that large media companies seem to be able to show copies of press releases without getting filtered out.

3. Google rarely penalizes sites for duplicate content. Their view is that it is usually inadvertent.

4. There are cases where Google does penalize. This takes some egregious act, or the implementation of a site that is seen as having little end user value.
I have seen instances of algorithmically applied penalties for sites with large amounts of duplicate content.

5. An example of a site that adds little value is a thin affiliate site, which is a site that uses copies of third party content for the great majority of its content, and exists to get search traffic and promote affiliate programs. If this is your site, Google may well seek to penalize you.

6. Google does a good job of handling foreign language versions of sites. They will most likely not see a Spanish-language version and an English-language version of a site as duplicates of one another.

7. A tougher problem is US and UK variants of sites ("color" vs. "colour"). The best way to handle this is with in-country hosting, which makes it easier for Google to tell the variants apart.

8. Google recommends that you use Noindex metatags or robots.txt to help identify duplicate pages you don’t want indexed. For example, you might use this with “Print” versions of pages you have on your site.

9. Vanessa Fox indicated in her Duplicate Content Summit at SMX that Google will not punish a site for implementing NoFollow links to a large number of internal site links. However, the recommendation is still that you should use robots.txt or NoIndex metatags.

10. When Google comes to your site, they have in mind a number of pages that they are going to crawl. One of the costs of duplicate content is that when the crawler loads a duplicate page, one that they are not going to index, it has loaded that page instead of a page that it might index. This is a big downside of duplicate content if your site ends up less fully indexed as a result.

11. I also believe that duplicate content pages cause internal bleeding of page rank. In other words, link juice passed to pages that are duplicates is wasted, and this is better passed on to other pages.

12. Google finds it easy to detect certain types of duplicate content, such as print pages, archive pages in blogs, and thin affiliates. These are usually recognized as being inadvertent.

13. They are still working on RSS feeds and the best way to keep them from showing up as duplicate content. The acquisition of FeedBurner will likely speed the resolution of that issue.

14. One key signal they use to select a page from a group of duplicates is which page is linked to the most.

15. Lastly, if you are doing a search and you DO want to see duplicate content results, just do your search, get the results, append the "&filter=0" parameter to the end of your search results URL, and refresh the page.
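Point 8's recommendation of using robots.txt to keep "Print" versions (and similar inadvertent duplicates) out of the crawl might look like this minimal sketch; the paths are illustrative assumptions, not from the original article:

```
User-agent: *
Disallow: /print/
Disallow: /archive/
```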

Source by creeper-seo.com - Seo News Source

Wednesday, August 1, 2007

What are the Effects of Two Addresses in the Footer of a Website?

A web designer at Cre8asite Forums has an interesting predicament. She has a client with a local store and a headquarters located in two different states. Would it be bad to put two addresses in the footer of the website? Could it negatively affect organic and local rankings where it previously helped?

Nobody knows for sure. One member suggests that you should not put two addresses in the footer but the other address should be posted somewhere, like on the Contact Us page.

Or you can use Google Trends to see which location is more popular. Ultimately, the visitors come first.

But moderator EGOL says that this is a good question to experiment upon.

You have a chance to do a great experiment here…. run analytics to see what search queries come in for the current state, then tally the google rankings for those queries, and then run rankings for matching queries for the new state…. upload the new footer and see what happens to the ranks.

This reminds me of adding your address to Google’s Local Business Center, which can also help.

Forum discussion continues at Cre8asite Forums.

Tuesday, July 31, 2007

How your competitors can sabotage your website rankings

Good search engine rankings are important to your business. Your competitors know that too, and some of them might not use ethical business practices. Here are some things your competitors might do to sabotage your search engine rankings:

1. Your competitors might create spam under your name

All major search engines use links to calculate the ranking of web pages. It's not only the number of links that counts but also the quality.

Your competitor might add your website to several spam linking schemes to hurt your site.

In addition, your competitor might use your website URL for spamming in online forums, social network sites and blog comments. Although it's not you who is spamming the websites, it will be hard to prove that you're innocent and social network sites might ban your website (which will have a negative effect on the link structure of your site).

2. Your competitors might inform on you

Did you buy links on other websites to improve your search engine rankings? Google doesn't like that at all. If your competitor finds out that you use paid links he might tell Google and your rankings might drop.

The same can happen if you use any unethical SEO method on your website (hidden text, cloaking, etc.). If your competitor finds out and informs Google then it's likely that it will affect your search engine rankings.

3. Your competitor might send a copyright complaint

If a search engine has been notified about a copyright infringement on your website then the search engine must remove the page from its index for 10 days. If your competitor files a copyright complaint against you then your website can be temporarily removed from the search results.

4. Your competitor might create duplicate content

Search engines don't like duplicate content. If more than one web page has the same content then search engines will pick one page and drop the rest.

If your competitor creates duplicates of your web page content then these duplicates might get better rankings than your own site. Of course, this can cause legal problems for the person who duplicates the content (as all methods mentioned in this article).

You cannot prevent unethical competitors from spamming other sites with your name, but you can avoid being banned for using spam techniques on your own web pages. Only use ethical search engine optimization methods to get high rankings on Google and other major search engines.

Source by free-seo-news.com

Saturday, July 28, 2007

Google’s Supplemental Index

The Big Daddy update of late 2005 to early 2006 was largely about installing a new Supplemental index. The new version is so different from the old one that it shouldn't really still be called the Supplemental index. The old Supplemental index was a repository for garbage webpages and the like, and was consulted for the search results only when a reasonable number of results couldn't be found in the regular index. The new version is very different, because many millions of perfectly good pages are put in it.

Many, perhaps most, websites have plenty of their pages in the Supplemental index because their linkage profiles don't score well enough. Even Google has pages in there - hundreds of thousands of them. A site's linkage profile is an evaluation of the links into and out of the site. Things like linking to off-topic sites, or too high a percentage of a site's inbound links being reciprocals, lower the score of a site's linkage profile and reduce the number of pages that it can have in the Regular index, which means that more of its pages are placed in the Supplemental index. Improving the linkage profile brings pages out of the Supplemental index and into the Regular one.

Before Big Daddy, pages in the Supplemental index had been given the kiss of death - they rarely came out, and were rarely seen in the search results. But that has changed, and is continuing to change. It is now possible to bring pages out of the Supplemental index by getting some good links to the site, and the continued improvement is in the way that the Supplemental index is used by Google’s system.

Right now, most of the datacenters are using the new Supplemental index in the same way as the old one was used; i.e. get a results set from the Regular index and, if the set isn’t large enough, add to it from the Supplemental index. The quality of the results from the Regular index doesn’t come into it. If the results set is large enough, the Supplemental index is ignored.

But at least one datacenter operates differently. It works along these lines: get a results set from the Regular index; if many of those results are poor-quality matches (e.g. they only match one word of a three-word query), get some better matches from the Supplemental index. Using the Supplemental index in something like this way is likely to spread across the datacenters in 2007.

The new way makes a lot of sense. Since many of the results that are acquired from the Regular index are often poor matches for the query, and since millions of perfectly good pages are now stored in the Supplemental index, some of which will be good matches for many queries, it makes good sense to pull results from the Supplemental index when there are some poor matches from the Regular index.

It’s good news for website owners who have large numbers of pages in the Supplemental index. As the new way of operating spreads, more of their pages will rightly find their way into the search results, even though they are in the Supplemental index.

Source by www.webworkshop.net

Friday, July 27, 2007

12 Ways Webmasters Create Duplicate Content

At the start of this session, the search engines all talked about various types of duplicate content. But let’s take a deeper look at the way that duplicate content happens. Here are 12 ways people unintentionally create dupe content:

  1. Build a site for the sole purpose of promoting affiliate offers, and use the canned text supplied by the agency managing the affiliate program.
  2. Generate lots of pages with little unique text. Weak directory sites could be an example of this.
  3. Use a CMS that allows multiple URLs to refer to the same content. For example, do you have a dynamic site where http://www.yoursite.com/level1id/level2id pulls up the exact same content as http://www.yoursite.com/level2id? If so, you have duplicate content. This is made worse if your site actually refers to these pages using multiple methods. A surprising number of large sites do this.
  4. Use a CMS that resolves sub domains to your main domain. As with the prior point, a surprising number of large sites have this problem as well.
  5. Generate pages that differ only by simple word substitutions. The classic example of this is to generate pages for blue widgets for each state where the only difference between the pages is a simple word substitution (e.g. Alabama Blue Widgets, Arizona Blue Widgets, …).
  6. Forget to implement a canonical redirect. For example, not 301 redirecting http://yoursite.com to http://www.yoursite.com (or vice versa) for all the pages on your site. Regardless of which form you pick to be the preferred form of URL for your site, someone out there will link to the other form, so implementing the 301 redirect will eliminate that duplicate content problem for you, as well as consolidate all the page rank from your inbound links.
  7. Having the on-site links back to your home page point to http://www.yoursite.com/index.html (or index.htm, or index.shtml, or …). Since most of the rest of the world will link to http://www.yoursite.com, you have now created duplicate content, and divided your page rank, if you have done this.
  8. Implement printer-friendly pages, but fail to use robots.txt to keep them from being crawled.
  9. Implement archive pages, but fail to use robots.txt to keep them from being crawled.
  10. Use Session ID parameters on your URLs. This means that every time the crawler comes to your site it thinks it is seeing different pages.
  11. Implement parameters on your URLs for other tracking related purposes. One of the most popular is to implement an affiliate program. The search engine will see http://www.yoursite.com?affid=1234 as a duplicate of http://www.yoursite.com. This is made worse if you leave the “affid” on the URL throughout the user’s visit to your site. A better solution is to remove the ID when they arrive at the site, after storing the affiliate information in a cookie. Note that I have seen a case where an affiliate had a strong enough site that http://www.yoursite.com?affid=1234 started showing up in the search engines rather than http://www.yoursite.com (NOT good).
  12. Implement a site where parameters on URLs are ignored. If you, or someone else, links to your site with a parameter on the URL, it will look like dupe content.
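Points 6 and 11 both come down to collapsing URL variants onto one canonical form before (or instead of) letting them be indexed. A minimal sketch of that cleanup in Python; the hostname and the "affid" parameter name are taken from the examples above, and a real fix would serve this as a 301 redirect rather than just computing the target:

```python
# Sketch: compute the canonical form of a URL by forcing the preferred
# www host (point 6) and stripping tracking parameters such as the
# hypothetical "affid" (point 11).
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "www.yoursite.com"  # the preferred form of the domain

def canonical(url):
    """Return the canonical form of a URL: force the www host,
    drop affid-style tracking parameters, and normalize an empty path."""
    parts = urlsplit(url)
    query = "&".join(p for p in parts.query.split("&")
                     if p and not p.startswith("affid="))
    return urlunsplit((parts.scheme, CANONICAL_HOST, parts.path or "/", query, ""))
```

For example, `canonical("http://yoursite.com?affid=1234")` yields `"http://www.yoursite.com/"`, the single form a 301 redirect would send every variant to, consolidating the page rank from inbound links.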

There are many ways that people intentionally create duplicate content, by various scraping techniques, but there is no need to cover that here.

Source by stonetemple.com

Wednesday, July 25, 2007

ViewState and JavaBeans



Summary: Learn how ViewState in ASP.NET makes validation easier than using JavaBeans in JSP, by automatically persisting form data and returning it to the user if validation fails.

Introduction

Submission of a Web form is usually a two-stage process. The first stage is to validate the content of the form fields, ensuring that it falls within the allowed limitations of the data structure. The second stage, submitting the form to an underlying application for processing, occurs only after validation has succeeded. In this way, developers can be sure that the processing application will be called only once, and that it will always receive data it knows how to handle.

In most cases, validation is accomplished by having the form submit back to itself for validation. That is, if the form is on a page called Register.jsp, clicking the Submit button will send the form data to Register.jsp. Register.jsp will contain not only the HTML for the form itself, but also JavaScript code to examine each submitted field in the form and determine whether or not it is valid.

Similarly, in Microsoft® ASP.NET, all forms are posted back to the current .aspx page for validation. This process is called POSTBACK. When the page is requested for the first time, there is no additional information sent in the POST request and therefore the form appears blank. When the form is filled out and the Submit button is clicked, the same .aspx page is requested for a second time. This time, however, there are additional parameters included in the POST request (the values of the fields); the server recognizes this and performs validation on those parameters, forwarding to the appropriate page if they are all, indeed, valid.

But what happens if the form isn't valid? In both JSP and ASP.NET, we will want to redisplay the page so that the user can correct the invalid fields. However, we don't (usually) want the user to have to re-enter all the form data from scratch. So how do we maintain the data in some or all of the form fields, even after the page is reloaded?
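Framework aside, the validate-then-process flow just described can be sketched in a few lines of Python. The field name, the validation rule, and the return values here are all illustrative assumptions, not code from either platform:

```python
# Hedged sketch of self-postback validation: one handler serves both the
# first request (empty form) and subsequent submissions, redisplaying the
# user's own values when validation fails.

def validate(form):
    """Return a list of error messages (empty means the form is valid)."""
    errors = []
    if not form.get("email", "").count("@"):
        errors.append("email: missing @")
    return errors

def render_form(values, errors):
    # A real page would emit HTML here; we just echo the state to render.
    return {"page": "form", "values": values, "errors": errors}

def process(form):
    # The underlying application, called only once, with known-good data.
    return {"page": "thanks", "registered": form["email"]}

def handle_request(form):
    if not form:                # first visit: no POST data, blank form
        return render_form({}, [])
    errors = validate(form)
    if errors:                  # invalid: redisplay with values preserved
        return render_form(form, errors)
    return process(form)        # valid: hand off to the application
```

The key line is `render_form(form, errors)`: on failure, the submitted values themselves are fed back into the page, which is exactly the persistence problem the rest of this article is about.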

In this article, we will discuss various ways of persisting form data after submission. We will examine the most commonly used techniques in both JSP and ASP.NET, and then look at how ASP.NET can be used to simplify the entire process, abstracting it almost completely into the background.

Means of Persisting Form Data

There are many ways of persisting form data after the form has been submitted. Some of the more popular ways include the following:

  • The values of form fields can be stored in the Session object. This is what we do in the CodeNotes Web site; each user of the site has a unique session ID that identifies him or her and allows user data to persist throughout a visit to the site. Data can be added to the Session object with a line like this (in ASP.NET):
                Session["Name"] = "Bob Jones";

Session information can be stored in various locations: inside the ASP.NET runtime process, inside a dedicated Microsoft Windows® service, or inside a Microsoft SQL Server™ database. However, using the Session object, in any of these locations, is costly in server memory. In addition, you have to read the values out of session and put them back into the form on each page load. This routine code bulks up your pages.
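To make that plumbing concrete, here is a minimal Java sketch of the store-and-repopulate routine, using plain Maps as stand-ins for the real Request and Session objects (the persistAndRender helper and the firstName field are illustrative, not part of any servlet API):

```java
import java.util.HashMap;
import java.util.Map;

public class SessionPlumbingDemo {
    // Copy a submitted field into the session, then render the form field
    // back out with the saved value -- the per-field "plumbing" described
    // above. Maps stand in for the real Request and Session objects.
    public static String persistAndRender(Map<String, String> request,
                                          Map<String, Object> session,
                                          String field) {
        String submitted = request.get(field);
        if (submitted != null) {
            session.put(field, submitted);          // store on submit
        }
        String saved = (String) session.get(field); // read back on reload
        return "<input type=\"text\" name=\"" + field + "\" value=\""
                + (saved == null ? "" : saved) + "\"/>";
    }

    public static void main(String[] args) {
        Map<String, String> request = new HashMap<>();
        Map<String, Object> session = new HashMap<>();
        request.put("firstName", "Bob");
        System.out.println(persistAndRender(request, session, "firstName"));
    }
}
```

Multiplied across every field of every form, this is the routine code that "bulks up your pages."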

  • Cookies are another way of persisting data during (and between) user visits to a Web application. Unlike the Session object, cookies are stored on the individual user's machine, and are requested by the application itself each time the user visits. Cookies, however, take additional development time, and also require that all users have cookies enabled in their browsers—something that many people choose not to do for security reasons.
  • Another option for persisting data is to duplicate your form content in a hidden field that is posted back to the server. The server can then parse the hidden field and rewrite the page HTML by inserting the previously entered values. Hidden fields, like cookies and session storage, require additional "plumbing code" and can be difficult to maintain if the form changes even slightly.
  • One of the most popular methods of persisting form data in JSP is by using accessory data objects, such as JavaBeans. In the next section, we will discuss what JavaBeans are, how JavaBeans are used in a simple JSP application to persist form data, and look at an example of such an application.

JavaBeans and JSP

Although we store information in the Session object on codenotes.com, a more "proper" JSP alternative is to use JavaBeans. This involves designing a JavaBean class to represent the data structure of a form, and then accessing the bean when needed from a JSP page using a special syntax.

What are JavaBeans?

A JavaBean is a Java class that has member variables (properties) exposed via get and set methods. JavaBeans can be used for almost any purpose, from visual components to data elements. With regards to JSP, JavaBeans are generally data objects, and they follow some common conventions:

  • The Java class is named SomethingBean and may optionally implement the Serializable marker interface. This interface is important for beans that are attached to a Session object that must maintain state in a clustered environment, or if the JavaBean will be passed to an Enterprise JavaBean (EJB).
  • Each bean must have a constructor that has no arguments. Generally, the member variables are initialized in this constructor.
  • Bean properties consist of a member variable, plus at least one get method and/or a set method for the variable. Boolean values may use an is method instead of get (for example, isConfirmed()).
  • The member variables commonly have a lowercase first letter in the first word, with subsequent words capitalized (for example, firstName). The get and set methods are named getPropertyName and setPropertyName, where the property name matches the variable name (for example, getFirstName).
  • Get and set accessor methods often perform operations as simple as returning or setting the member variable, but can also perform complex validation, extract data from the database, or carry out any other task that acts on the member variables.

JavaBean Syntax

A simple JavaBean class might look something like Listing 1.

Listing 1. Simple JavaBean class (UserBean)

package com.codenotes;
 
public class UserBean {
   private String firstName;
   private String lastName;
 
   //default constructor
   public UserBean() {
      this.firstName = "";
      this.lastName = "";
   }
 
   //get methods
   public String getFirstName() {return firstName;}
   public String getLastName() {return lastName;}
 
   //set methods
   public void setFirstName(String firstName) {
      this.firstName = firstName;
   }
 
   public void setLastName(String lastName) {
      this.lastName = lastName;
   }
}

This class has get and set methods for two fields: firstName and lastName. Notice that this class exactly follows the conventions listed previously.

To use the Bean from a JSP script, we need only add the code in Listing 2 to the top of the JSP.

Listing 2. Using UserBean

<jsp:useBean
   id="UserBean"
   class="com.codenotes.UserBean"
   scope="session"/>
<jsp:setProperty
   name="UserBean"
   property="*"/>

The <jsp:useBean> element creates an instance of the UserBean class and assigns it session scope, which means it will remain available until the end of the user's session with your Web application. The <jsp:setProperty> element, in this case, populates the data structure of the JavaBean with the information in the Request object. Note that this will only work if the field names in the request exactly match the property names in the JavaBean.
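Under the hood, property="*" relies on this exact-name matching: the container pairs each request parameter with a correspondingly named setter on the bean, typically via reflection. The sketch below illustrates the idea under simplifying assumptions (String-typed properties only, no type conversion); BeanPopulator and DemoBean are hypothetical names:

```java
import java.lang.reflect.Method;
import java.util.Map;

public class BeanPopulator {
    // Roughly what property="*" does: for each request parameter, find a
    // matching setXxx(String) method on the bean and invoke it. Real
    // containers also perform type conversion; this sketch assumes
    // String-typed properties only.
    public static void populate(Object bean, Map<String, String> params) {
        for (Map.Entry<String, String> p : params.entrySet()) {
            String name = p.getKey();
            String setter = "set" + Character.toUpperCase(name.charAt(0))
                    + name.substring(1);
            try {
                Method m = bean.getClass().getMethod(setter, String.class);
                m.invoke(bean, p.getValue());
            } catch (NoSuchMethodException e) {
                // No matching property: the parameter is silently skipped,
                // which is why request field names must match exactly.
            } catch (ReflectiveOperationException e) {
                throw new RuntimeException(e);
            }
        }
    }

    // A minimal, hypothetical bean used to demonstrate the matching.
    public static class DemoBean {
        private String firstName = "";
        public String getFirstName() { return firstName; }
        public void setFirstName(String firstName) { this.firstName = firstName; }
    }
}
```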

We can now access the data stored in the bean from anywhere in the JSP, no matter how many times it is reloaded, by using code like Listing 3.

Listing 3. Getting values from a JavaBean

   <jsp:getProperty name="UserBean" property="firstName"/>

The JSP processor automatically interprets jsp:getProperty and jsp:setProperty tags and calls the appropriate methods in the Bean class itself.

Server-side Validation Using JavaBeans

In an ordinary HTML page, the only way to validate user input is on the client side using JavaScript. However, client-side validation can be problematic, as it depends on the client's browser properly implementing your JavaScript code. In addition, a malicious user can easily download your page, modify it to disable the JavaScript, and work around your validation.

Using JavaBean tags, however, you can easily make a "validation bean" that will perform secure server-side validation on your data entry fields. Once you are sure that the data is valid, you can transfer it from this bean to any back-end system, such as a database or EJB. The validation bean thus becomes an intermediate step which helps secure your Web forms without requiring a significant modification to your middle tier or back end data systems.

ValidationBean

ValidationBean might look something like Listing 4.

Listing 4. A ValidationBean

package com.codenotes;
 
import java.util.Vector;
 
public class ValidationBean {
   private String m_email = "";
   private String m_name = "";
   private int m_age = 0;
 
   private Vector messages = new Vector();
 
   public ValidationBean() {
      m_email = "";
      m_name = "";
      m_age = 0;
   }
 
   public String getEmail() {return m_email;}
   public void setEmail(String email) {m_email = email;}
   public void isValidEmail() {
      //check for @ symbol somewhere in string
      if ((m_email.length() == 0) || (m_email.indexOf("@") < 0)) {
         messages.add("Enter a valid email.");
      }
   }
 
   public String getName() {return m_name;}
   public void setName(String name) {m_name = name;}
   public void isValidName() {
      //check if name exists
      if (m_name.length() == 0) {
         messages.add("Name is required");
      }
   }
 
   public int getAge() {return m_age;}
   public void setAge(int age) {m_age = age;}
   public void isValidAge() {
      //must be at least 18 years old
      if (m_age < 18) {
         messages.add("You must be 18 years old to register.");
      }
   }
 
   public String[] getMessages() {
      isValidName();
      isValidAge();
      isValidEmail();
      return (String[])messages.toArray(new String[0]);
   }
 
}

The code in Listing 4 contains some interesting features. First, note that every bean you make for use in a JSP should be assigned to a package. If you don't assign the bean to a package, most servlet containers will assume it belongs to the package that is automatically generated when the JSP is compiled, and the bean will not be found. This problem also occurs with custom tag handlers.

Second, although the isValidXXX() functions traditionally return a Boolean value, in our case we have chosen to simply add a message to our message vector instead. The isValidXXX() functions are meant to be called internally. From the JSP, we simply call getMessages() and check the length. If any messages are present, then some data is invalid.

Finally, if we wanted a more advanced sort of validation, we could easily expand the logic in each of the isValidXXX() methods. For example, we could consider the email field valid if it is either missing or properly formatted (in other words, the field would be optional).
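As a sketch of that optional-field variant, a hypothetical isValidEmail() could accept an empty value and otherwise apply a (deliberately loose) format check; only the method body differs from Listing 4:

```java
import java.util.Vector;

public class OptionalEmailCheck {
    private String m_email = "";
    private final Vector<String> messages = new Vector<>();

    public void setEmail(String email) { m_email = email; }
    public Vector<String> getMessages() { return messages; }

    // Valid if the field is empty (it is optional) OR if it has text on
    // both sides of an @ symbol -- a deliberately loose format check.
    public void isValidEmail() {
        if (m_email.length() == 0) {
            return; // missing is allowed: the field is optional
        }
        if (!m_email.matches(".+@.+")) {
            messages.add("Enter a valid email, or leave the field blank.");
        }
    }
}
```

The getMessages()-driven flow from Listing 4 is unchanged: an empty vector still signals a valid form.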

Using ValidationBean

The JSP code itself will be very similar to that discussed in the previous section. Listing 5 shows an example of a form using ValidationBean.

Listing 5. Form using ValidationBean

<% response.setDateHeader("Expires", 0); %>
 
<jsp:useBean id="validBean"
   class="com.codenotes.ValidationBean">
   <jsp:setProperty name="validBean"
      property="*"/>
</jsp:useBean>

<html>
<body>
   <% String[] messages = validBean.getMessages();
   if (messages != null && messages.length > 0) {
      %>
      Please change the following fields:
      <ul>
         <% for (int i = 0; i < messages.length; i++) {
            out.println("<li>" + messages[i] + "</li>");
         }%>
      </ul>
   <% } else {
      //Valid form!
      //transfer data from validation bean to memory
      session.setAttribute("name", validBean.getName());

      //then forward to next page
      %>
      <jsp:forward page="CompleteForm.jsp" />
   <% } %>

   <form method="post">
      <input type="text" name="name"
         value='<jsp:getProperty name="validBean"
         property="name"/>' />
      Name <br/>

      <input type="text" name="age"
         value='<jsp:getProperty name="validBean"
         property="age" />' />
      Age <br/>

      <input type="text" name="email"
         value='<jsp:getProperty name="validBean"
         property="email" />' />
      Email <br/>

      <input type="submit"
         value="Submit" />
   </form>
</body>
</html>

    The JSP page in Listing 5 performs the following actions:

    1. Populates an instance of ValidationBean with data from the Request object.
    2. Queries the bean to see if it has any messages in its messages vector.
      • If there are error messages in the vector, the page displays a list of the error messages and redisplays the form using the data from the Bean to fill out the fields, where available.
      • If there are no error messages in the vector, the Bean is added to the user's current session, and the user is then forwarded to the next appropriate page.

    Note that we have to differentiate between single quotes and double quotes within each tag. If we don't switch quote types, the servlet container may become confused and parse the tags incorrectly.

    Using server-side validation offers many advantages over JavaScript and client-side code. Although you do have to write more boilerplate code in developing your JavaBeans, the resulting savings and simplicity in your JSP more than make up for it.

    JavaBeans Example

    As mentioned previously, the CodeNotes Web site uses the Session object instead of JavaBeans to persist form data, so there is no example of JavaBeans we can extract from the CodeNotes site. Instead, for the example in this article, we will design a simple Registration form similar to the one located at http://www.codenotes.com/login/registerAction.aspx, except that it uses JavaBeans instead of Session. In the next two major sections of this article, we will convert the example to ASP.NET using the Java Language Conversion Assistant (JLCA) to see how conversion of JavaBeans works, and then we will design a brand new Registration form in ASP.NET and show how new features in ASP.NET make persisting form data trivial.

    Note that to keep this example simple, we will not do any sort of validation on form data.

    UserBean

    UserBean is a straightforward JavaBean with basic getters and setters for the data in the form. The only thing to note is that we've put it in package codenotes. JavaBeans should always be placed in packages; otherwise, the servlet processor will assume they are in a default package and won't be able to find them. Listing 6 shows the code for UserBean.java.

    Listing 6. UserBean.java

    package codenotes;
     
    public class UserBean implements java.io.Serializable {
     
       private String userName;
       private String password;
       private String firstName;
       private String lastName;
       private String displayName;
     
       public UserBean() {
          this.userName="";
     
          this.password="";
          this.firstName="";
          this.lastName="";
          this.displayName="";
       }
     
       public String getUserName() {return userName;}
       public String getPassword() {return password;}
       public String getFirstName() {return firstName;}
       public String getLastName() {return lastName;}
       public String getDisplayName() {return displayName;}
     
     
       public void setUserName(String userName) {
          this.userName=userName;
       }
     
       public void setPassword(String password) {
          this.password=password;
       }
     
       public void setFirstName(String firstName) {
          this.firstName=firstName;
       }
     
       public void setLastName(String lastName) {
          this.lastName=lastName;
       }
     
       public void setDisplayName(String displayName) {
          this.displayName=displayName;
       }
    }

    Register.htm

    Register.htm is a simple HTML file that will contain the form for the user to fill out. It contains no special tags or JavaScript code of any kind; it simply uses Welcome.jsp (described in the next section) as its target ACTION. We're also going to use HTTP GET instead of POST, so you can see the parameters in the URL. Listing 7 shows Register.htm.

    Listing 7. Register.htm

    <html>
       <body>
          <h1>Registration Form</h1>
          <form action="welcome.jsp" method="get" id="regForm"
             name="regForm">
             <table>
                <tr>
                   <td>UserName/Email:</td>
                   <td><input type="text" name="userName"/></td>
                </tr>
                <tr>
                   <td>Password:</td>
                   <td><input type="password" name="password"/></td>
                </tr>
                <tr>
                   <td>Re-enter Password:</td>
                   <td><input type="password" name="password2"/></td>
                </tr>
                <tr>
                   <td>FirstName:</td>
                   <td><input type="text" name="firstName"/></td>
                </tr>
                <tr>
                   <td>LastName:</td>
                   <td><input type="text" name="lastName"/></td>
                </tr>
                <tr>
                   <td>DisplayName:</td>
                   <td><input type="text" name="displayName"/></td>
                </tr>
                <tr>
                   <td colspan="2"><input type="submit" value="Register"/></td>
                </tr>
             </table>
          </form>
       </body>
    </html>

    Welcome.jsp

    Finally, on Welcome.jsp we simply display the data that was entered into the form on the page. This is where we populate a UserBean instance with the results of the form submission, and then use jsp:getProperty tags to access the information from the Bean, when needed. Listing 8 shows Welcome.jsp.

    Listing 8. Welcome.jsp

    <jsp:useBean id="UserBean" class="codenotes.UserBean"
       scope="session" />
    <jsp:setProperty name="UserBean" property="*" />
    <html>
       <body>
          <h1>Welcome!</h1>
          <h2>Your registration data:</h2>
          <table>
             <tr>
                <td>User Name:</td>
                <td><jsp:getProperty name="UserBean"
                   property="userName" /></td>
             </tr>
             <tr>
                <td>Password:</td>
                <td><jsp:getProperty name="UserBean"
                   property="password" /></td>
             </tr>
             <tr>
                <td>First Name:</td>
                <td><jsp:getProperty name="UserBean"
                   property="firstName" /></td>
             </tr>
             <tr>
                <td>Last Name:</td>
                <td><jsp:getProperty name="UserBean"
                   property="lastName" /></td>
             </tr>
             <tr>
                <td>Display Name:</td>
                <td><jsp:getProperty name="UserBean"
                   property="displayName" /></td>
             </tr>
          </table>
       </body>
    </html>

    Converting JSP to ASP.NET by Using JLCA

    The Java Language Conversion Assistant (JLCA) converts a JavaBean class into a Microsoft® .NET class that implements System.Runtime.Serialization.ISerializable. For each pair of get and set methods in the original Bean class, JLCA creates a property (virtual public object) with get and set accessors. An ASP.NET page can then create an instance of the serializable class and access its properties through that instance.

    We'll convert the UserBean example from the previous section to an ASP.NET application. If you run the conversion wizard on the application found in the jlcademo.msi, you should get an almost "perfect" conversion, with no warnings or errors at all.

    JLCA will leave register.htm as is, and will convert UserBean.java to a C# file named UserBean.cs. Examining UserBean.cs, we can see how each of UserBean's get/set pairs has been converted to a property, like the one shown in Listing 9.

    Listing 9. A UserBean property

    virtual public System.String UserName
    {
       get
       {
          return userName;
       }
     
       set
       {
          this.userName = value;
       }
     
    }

    In order for this class to compile correctly, however, you will need to implement a method called GetObjectData(), which is required of any class that implements the ISerializable interface. You don't need to write any code for this. Simply do the following:

    1. Switch to the Class View instead of the Solution Explorer.
    2. Expand beansConv, then codenotes, then UserBean, and then Bases and Interfaces.
    3. Right-click ISerializable, point to Add, and then click Implement Interface.

    This will add the necessary implementation code for GetObjectData() to your class, and conceal it within a region so you don't have to worry about it.

    The welcome.jsp file from the previous example converts perfectly into welcome.aspx, so you don't need to make any changes there. In fact, all you need to do now to get the application running is to right-click register.htm in the Solution Explorer, and then click Set as Start Page. After that, you can run the application and see that it is functionally identical to the JavaBean example from the previous section.

    One thing you may notice is that JLCA has added a significant amount of additional code to the beginning of welcome.aspx. This code does two things:

    1. Checks to see if the user's current session already contains an instance of UserBean. If it doesn't, it creates a new, empty UserBean.
    2. Populates the UserBean with values from the Request object, if there are any. Because UserBean.cs implements ISerializable, it is able to populate a collection representing the properties of the Bean and then cycle through them, adding the correct value from the Request object to the correct property.

    This code replaces the JSP container code that performed the same actions when a jsp:setProperty tag was encountered. As you can see, the JLCA does its best to ensure that your converted application remains as faithful as possible to the functionality of the original JSP code.

    The ASP.NET Alternative

    Instead of using an accessory data object like a JavaBean to store field data during validation, ASP.NET adds a special hidden field named __VIEWSTATE to the generated source for every form. This hidden field stores the state of all controls on the page, such as the text entered in a text box, whether checkboxes are checked or unchecked, the contents and selected items in list boxes, and so on. Therefore, there is no need for you to add additional code to persist field values each time a Submit button is clicked and validation is performed.

    Traditionally, many ASP applications used standard hidden fields to store field data between validation attempts. ViewState alleviates several problems with normal hidden fields, including:

    1. Normally, extra code was required to put field values into hidden fields for storage upon submission. ASP.NET does not require any extra code, as it automatically serializes the values of all fields on the page into a single __VIEWSTATE hidden field.
    2. The names and content of hidden fields are usually easily readable in the source code for an HTML form. The __VIEWSTATE field, on the other hand, is serialized and encoded, making it unreadable at a glance; with machine authentication enabled (discussed below), the server can also detect any tampering with its contents.
    3. You can specifically identify which controls should or shouldn't be included in the __VIEWSTATE field, and even assign a page level directive to disable ViewState if you choose. All of these settings are controlled with simple properties, and no coding is required.

    Although the use of the __VIEWSTATE field in ASP.NET may seem like an internal implementation detail of the Framework, it can tangibly influence the performance of your applications. Every time the server must update a page, the contents of the form on the page are actually sent to the client twice; first, as regular HTML, and then as encoded values in the __VIEWSTATE field. If an application has many controls that contain a lot of information, this hidden field can grow to sizable proportions. Because this field is transmitted as part of the response to a client in ASP.NET, it can adversely affect transmission time.
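    As a rough analogy (in Java, since the actual __VIEWSTATE format is internal to ASP.NET), serializing a map of control states into a single Base64 string and decoding it on the next request shows both the mechanism and why the field grows with the number of controls; ViewStateAnalogy is a hypothetical name:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Base64;
import java.util.HashMap;

public class ViewStateAnalogy {
    // Fold the state of "all controls" into one opaque string, the way
    // ASP.NET folds everything into the single __VIEWSTATE field.
    // This is an analogy only: ASP.NET's real format differs.
    public static String encode(HashMap<String, String> controlState) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(controlState);
            }
            return Base64.getEncoder().encodeToString(bytes.toByteArray());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Reverse the process on the next "postback".
    @SuppressWarnings("unchecked")
    public static HashMap<String, String> decode(String viewState) {
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(Base64.getDecoder().decode(viewState)))) {
            return (HashMap<String, String>) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

    Each control's value travels once in the rendered HTML and again inside this opaque string, which is exactly the duplication that inflates transmission time.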

    Using ViewState

    By default, most controls in ASP.NET automatically have ViewState enabled. This means that you don't have to do anything special in order to have them persist data between validation attempts.

    Let's look at an example. We'll create a simple ASP.NET application that uses several common controls. As usual, we can simply drag components onto the Design View of the Web form. There's no need to add any additional code. In this case, we're going to add a text box, a calendar, and a button.

    If you select any one of the controls and examine its enableViewState property in the properties box, you will see that, by default, it is set to true. This means that the state of the control will be stored in the hidden __VIEWSTATE field, and will be persisted even after clicking the Go! button.

    Run the application. Type your name in the field and select a date on the calendar. Then click Go! Since we didn't add any code to the button, the form will simply post and refresh on the same page, but as you will see, the text you entered in the field and the date you selected on the calendar remain exactly as they were. You can examine the content of the __VIEWSTATE field by right-clicking on the page, and then clicking View Source. Somewhere near the top of the page, you'll see an element that looks like the one in Listing 10.

    Listing 10. A __VIEWSTATE example

    <input type="hidden" name="__VIEWSTATE"
       value="dDw3MDY2NzMxNDI7dDw7bDxp...7Pj47Phdg9e+N3tG/uHE9I7KBRj6NR9Oe"/>

    This element contains information on the state of each control in the page. You can watch it change, for example, by modifying the date you selected in the calendar.

    Disabling ViewState

    If, for some reason, you don't want an element to persist its state between posts, it is very easy to disable the ViewState on that element. Simply change the value of the enableViewState property for a particular element to "false" instead of "true." Try disabling ViewState on both the textbox and the calendar and then running the example application again. Type some text in the field and select a date as before, and then click Go!

    What happens? The calendar resets itself to its original state, and the date you selected is lost. Because you have configured your calendar not to store its content in __VIEWSTATE, the application will have no memory of any manipulation of that control after a postback to the server.

    However, you may have noticed that the text in your text box remained, even though you disabled ViewState for it as well. This is because certain simple control properties (like the text content of a text box) can be stored as basic text, and therefore ViewState is not necessary (and thus not implemented) to persist the values. ASP.NET simply obtains the value from the Request object instead. However, if you were to dynamically set a more complex property of the text box, such as its background color (for example, with a Change Color button), that change would be lost if ViewState were disabled on the text box.

    Note that you can also disable ViewState for an entire ASP.NET page by clicking on the form and changing the value of its enableViewState property to "false."

    enableViewStateMac

    If you looked at the enableViewState property of an ASP.NET form, you may also have noticed a property below it named enableViewStateMac. MAC stands for Machine Authentication Code. When this property is set to true, ASP.NET will include a machine authentication code in the __VIEWSTATE field. This prevents tampering, as it ensures that only the machine that encoded the __VIEWSTATE field in the first place can decode it and determine field values from it. There is generally no reason to turn this feature off.

    Advantages of Using ViewState

    The primary advantages of the ViewState feature in ASP.NET are:

    • Simplicity. There is no need to write or generate complex JavaBean classes in order to store form data between submissions. ASP.NET does everything for you automatically, and you can simply turn ViewState off if you don't want to use it for a particular control. Basically, persistence is all done in the background.
    • Flexibility. It's easy to configure ViewState on a control-by-control basis, so you can have certain fields maintain themselves so that the user does not have to re-enter them, and have other fields reset every time to ensure that the user enters them correctly. There is no need to ensure that the data submitted by your form fits a particular data structure, as the __VIEWSTATE field is encoded and decoded on the fly, with all the information in the correct location and order.

    Limitations of Using ViewState

    The primary limitations of ViewState are:

    • You can't transfer ViewState information from page to page. With JavaBeans, you can simply store the JavaBean in the session and then access it again from somewhere else. This is not possible with ViewState, so if you want to store information in a user's session for later you need to create your own data object for storage (or store each field individually).
    • ViewState is not suitable for transferring data for back-end systems. That is, you still have to transfer your form data over to the back end using some form of data object. With a JavaBean, you simply transfer a reference to the JavaBean and let the back end extract whatever data it needs.

    Summary

    This article discusses solutions to a problem that will arise in almost every Web application: How to store form data during validation (in other words, how to persist form data between calls to the server).

    In JSP, JavaBeans are the most common solution. JavaBeans are special classes that follow a particular structure: for each field you want to store, you have a get and set method for retrieving and modifying a value in the bean. You can then invoke and access JavaBeans from within JSP code using special tags.

    In ASP.NET, the state of fields is stored in an encoded, hidden field called __VIEWSTATE. Addition of form data to __VIEWSTATE is done automatically by ASP.NET, and there is no need to manually create any auxiliary data objects. ViewState is extremely easy to use and always appropriate for simple persistence of data during validation. For more complicated data transfers, however, such as persisting data across different pages, you will need to create and use a data object.