
Tuesday, July 31, 2007

How your competitors can sabotage your website rankings

High search engine rankings are valuable, and your competitors know it too; some of them might not use ethical business practices. Here are some things your competitors might do to sabotage your search engine rankings:

1. Your competitors might create spam under your name

All major search engines use links to calculate the ranking of web pages. It's not only the number of links that counts but also the quality.

Your competitor might add your website to several spam linking schemes to hurt your site.

In addition, your competitor might use your website URL for spamming in online forums, social network sites and blog comments. Although it's not you who is spamming the websites, it will be hard to prove that you're innocent and social network sites might ban your website (which will have a negative effect on the link structure of your site).

2. Your competitors might report you

Did you buy links on other websites to improve your search engine rankings? Google doesn't like that at all. If your competitor finds out that you use paid links he might tell Google and your rankings might drop.

The same can happen if you use any unethical SEO method on your website (hidden text, cloaking, etc.). If your competitor finds out and informs Google then it's likely that it will affect your search engine rankings.

3. Your competitor might send a copyright complaint

If a search engine has been notified about a copyright infringement on your website then the search engine must remove the page from its index for 10 days. If your competitor files a copyright complaint against you then your website can be temporarily removed from the search results.

4. Your competitor might create duplicate content

Search engines don't like duplicate content. If more than one web page has the same content then search engines will pick one page and drop the rest.

If your competitor creates duplicates of your web page content then these duplicates might get better rankings than your own site. Of course, this can cause legal problems for the person who duplicates the content (as all methods mentioned in this article).

You cannot prevent unethical competitors from spamming other sites with your name, but you can avoid being banned for using spam techniques on your own web pages. Only use ethical search engine optimization methods to get high rankings on Google and other major search engines.

Source by free-seo-news.com

Saturday, July 28, 2007

Google’s Supplemental Index

The Big Daddy update of late 2005 to early 2006 was largely about installing a new Supplemental index. The new version is so different to the old version that it shouldn’t now be called the Supplemental index. The old Supplemental index was a repository for garbage webpages and such, and was accessed for the search results only when a reasonable number of results couldn’t be found in the regular index. The new version is very different because many millions of perfectly good pages are put in it.

Many, perhaps most, websites have plenty of their pages in the Supplemental index because their linkage profiles don’t score well enough. Even Google has pages in there - hundreds of thousands of them. A site’s linkage profile is an evaluation of the links into and out of the site. Things like linking to off-topic sites, and too high a percentage of a site’s inbound links being reciprocals, lower the score of a site’s linkage profile and reduce the number of pages that it can have in the Regular index, which means that more of its pages are placed in the Supplemental index. Improving the linkage profile brings pages out of the Supplemental index and into the Regular one.

Before Big Daddy, pages in the Supplemental index had been given the kiss of death - they rarely came out, and were rarely seen in the search results. But that has changed, and is continuing to change. It is now possible to bring pages out of the Supplemental index by getting some good links to the site, and the continued improvement is in the way that the Supplemental index is used by Google’s system.

Right now, most of the datacenters are using the new Supplemental index in the same way as the old one was used; i.e. get a results set from the Regular index and, if the set isn’t large enough, add to it from the Supplemental index. The quality of the results from the Regular index doesn’t come into it. If the results set is large enough, the Supplemental index is ignored.

But at least one datacenter operates differently, along these lines: get a results set from the Regular index; since many of those results may be poor-quality matches (e.g. they match only one word of a three-word query), get some better matches from the Supplemental index. This way of using the Supplemental index is likely to spread across the datacenters in 2007.

The new way makes a lot of sense. Since many of the results that are acquired from the Regular index are often poor matches for the query, and since millions of perfectly good pages are now stored in the Supplemental index, some of which will be good matches for many queries, it makes good sense to pull results from the Supplemental index when there are some poor matches from the Regular index.

It’s good news for website owners who have large numbers of pages in the Supplemental index. As the new way of operating spreads, more of their pages will rightly find their way into the search results, even though they are in the Supplemental index.

Source by www.webworkshop.net

Friday, July 27, 2007

12 Ways Webmasters Create Duplicate Content

At the start of this session, the search engines all talked about various types of duplicate content. But let’s take a deeper look at the way that duplicate content happens. Here are 12 ways people unintentionally create dupe content:

  1. Build a site for the sole purpose of promoting affiliate offers, and use the canned text supplied by the agency managing the affiliate program.
  2. Generate lots of pages with little unique text. Weak directory sites could be an example of this.
  3. Use a CMS that allows multiple URLs to refer to the same content. For example, do you have a dynamic site where http://www.yoursite.com/level1id/level2id pulls up the exact same content as http://www.yoursite.com/level2id? If so, you have duplicate content. This is made worse if your site actually refers to these pages using multiple methods. A surprising number of large sites do this.
  4. Use a CMS that resolves sub domains to your main domain. As with the prior point, a surprising number of large sites have this problem as well.
  5. Generate pages that differ only by simple word substitutions. The classic example of this is to generate pages for blue widgets for each state where the only difference between the pages is a simple word substitution (e.g. Alabama Blue Widgets, Arizona Blue Widgets, …).
  6. Forget to implement a canonical redirect. For example, not 301 redirecting http://yoursite.com to http://www.yoursite.com (or vice versa) for all the pages on your site. Regardless of which form you pick to be the preferred form of URL for your site, someone out there will link to the other form, so implementing the 301 redirect will eliminate that duplicate content problem for you, as well as consolidate all the page rank from your inbound links.
  7. Having your on-site home page links point to http://www.yoursite.com/index.html (or index.htm, or index.shtml, or …). Since most of the rest of the world will link to http://www.yoursite.com, doing this creates duplicate content and divides your page rank.
  8. Implement printer pages, but not using robots.txt to keep them from being crawled.
  9. Implement archive pages, but not using robots.txt to keep them from being crawled.
  10. Using Session ID parameters on your URLs. This means every time the crawler comes to your site it thinks it is seeing different pages.
  11. Implement parameters on your URLs for other tracking related purposes. One of the most popular is to implement an affiliate program. The search engine will see http://www.yoursite.com?affid=1234 as a duplicate of http://www.yoursite.com. This is made worse if you leave the “affid” on the URL throughout the user’s visit to your site. A better solution is to remove the ID when they arrive at the site, after storing the affiliate information in a cookie. Note that I have seen a case where an affiliate had a strong enough site that http://www.yoursite.com?affid=1234 started showing up in the search engines rather than http://www.yoursite.com (NOT good).
  12. Implement a site where parameters on URLs are ignored. If you, or someone else, links to your site with a parameter on the URL, it will look like dupe content.
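Points 6 and 7 above can be sketched in code. The following is a minimal illustration (not from the original article) of computing a canonical URL; the class name CanonicalUrl and the preferred host www.yoursite.com are assumptions, and on a real site this logic would sit in a servlet filter or rewrite rule that issues a 301 redirect whenever the canonical form differs from the requested URL.

```java
// Illustrative only: normalize a requested URL to its canonical form.
// Assumes www.yoursite.com is the preferred host (point 6) and that
// /index.html should collapse to the site root (point 7).
public class CanonicalUrl {

    public static String canonicalize(String url) throws Exception {
        java.net.URL u = new java.net.URL(url);

        // Point 6: send the bare domain to the www form.
        String host = u.getHost();
        if (host.equals("yoursite.com")) {
            host = "www.yoursite.com";
        }

        // Point 7: strip a trailing index.html so internal and
        // external links resolve to the same URL.
        String path = u.getPath();
        if (path.endsWith("/index.html")) {
            path = path.substring(0, path.length() - "index.html".length());
        }
        if (path.isEmpty()) {
            path = "/";
        }
        return u.getProtocol() + "://" + host + path;
    }
}
```

A filter would compare canonicalize() of the requested URL with the URL itself and send a 301 redirect whenever the two differ.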

There are many ways that people intentionally create duplicate content, by various scraping techniques, but there is no need to cover that here.

Source by stonetemple.com

Wednesday, July 25, 2007

ViewState and JavaBeans


Summary: Learn how ViewState in ASP.NET makes validation easier than using JavaBeans in JSP, by automatically persisting form data and returning it to the user if validation fails.

Introduction

Submission of a Web form is usually a two-stage process. The first stage is to validate the content of the form fields to ensure that it is valid and falls within the allowed limitations of the data structure. The second stage, submitting the form to an underlying application for processing, occurs only after validation has been successful. In this way, developers can be sure that the processing application will be called only once, and that it will always receive data it knows how to handle.

In most cases, validation is accomplished by having the form submit back to itself for validation. That is, if the form is on a page called Register.jsp, clicking the Submit button will send the form data to Register.jsp. Register.jsp will contain not only the HTML for the form itself, but also JavaScript code to examine each submitted field in the form and determine whether or not it is valid.

Similarly, in Microsoft® ASP.NET, all forms are posted back to the current .aspx page for validation. This process is called POSTBACK. When the page is requested for the first time, there is no additional information sent in the POST request and therefore the form appears blank. When the form is filled out and the Submit button is clicked, the same .aspx page is requested for a second time. This time, however, there are additional parameters included in the POST request (the values of the fields); the server recognizes this and performs validation on those parameters, forwarding to the appropriate page if they are all, indeed, valid.

But what happens if the form isn't valid? In both JSP and ASP.NET, we will want to redisplay the page for the user so that they can correct errors in invalid fields. However, we don't (usually) want the user to have to re-enter all the form data from scratch. So how do we maintain the data in some or all of the form fields, even after the page is reloaded?

In this article, we will discuss various ways of persisting form data after submission. We will examine the most commonly used techniques in both JSP and ASP.NET, and then look at how ASP.NET can be used to simplify the entire process, abstracting it almost completely into the background.

Means of Persisting Form Data

There are many ways of persisting form data after the form has been submitted. Some of the more popular ways include the following:

  • The values of form fields can be stored in the Session object. This is what we do in the CodeNotes Web site; each user of the site has a unique session ID that identifies him or her and allows user data to persist throughout a visit to the site. Data can be added to the Session object with a line like this (in ASP.NET):
                Session["Name"] = "Bob Jones";

Session information can be stored in various locations: inside the ASP.NET runtime process, inside a dedicated Microsoft Windows® service, or inside a Microsoft SQL Server™ database. However, using the Session object, in any of these locations, is costly in server memory. In addition, you have to read the values out of session and put them back into the form on each page load. This routine code bulks up your pages.

  • Cookies are another way of persisting data during (and between) user visits to a Web application. Unlike the Session object, cookies are stored on the individual user's machine, and are requested by the application itself each time the user visits. Cookies, however, take additional development time, and also require that all users have cookies enabled in their browsers—something that many people choose not to do for security reasons.
  • Another option to persisting data is to duplicate your form content in a hidden field that is posted back to the server. The server can then parse the hidden field and rewrite the page HTML by inserting the previously entered values. Hidden fields, like cookies and session storage, require additional "plumbing code" and can be difficult to maintain if the form changes even slightly.
  • One of the most popular methods of persisting form data in JSP is by using accessory data objects, such as JavaBeans. In the next section, we will discuss what JavaBeans are, how JavaBeans are used in a simple JSP application to persist form data, and look at an example of such an application.

JavaBeans and JSP

Although we store information in the Session object on codenotes.com, a more "proper" JSP alternative is to use JavaBeans. This involves designing a JavaBean class to represent the data structure of a form, and then accessing the bean when needed from a JSP page using a special syntax.

What are JavaBeans?

A JavaBean is a Java class that has member variables (properties) exposed via get and set methods. JavaBeans can be used for almost any purpose, from visual components to data elements. With regards to JSP, JavaBeans are generally data objects, and they follow some common conventions:

  • The Java class is named SomethingBean and may optionally implement the Serializable marker interface. This interface is important for beans that are attached to a Session object that must maintain state in a clustered environment, or if the JavaBean will be passed to an Enterprise JavaBean (EJB).
  • Each bean must have a constructor that has no arguments. Generally, the member variables are initialized in this constructor.
  • Bean properties consist of a member variable, plus at least one get method and/or a set method for the variable. Boolean values may use an is method instead of get (for example, isConfirmed()).
  • The member variables commonly have a lowercase first letter in the first word, with subsequent words capitalized (for example, firstName). The get and set methods are named getPropertyName and setPropertyName, where the property name matches the variable name (for example, getFirstName).
  • Get and set accessor methods often perform operations as simple as returning or setting the member variable, but can also perform complex validation, extract data from the database, or carry out any other task that acts on the member variables.
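The boolean-accessor convention mentioned above can be shown with a small example. The bean below (a hypothetical ConfirmationBean, not part of the article's code) follows all of the listed conventions: a no-argument constructor, a lowercase-first property name, and an is-method rather than a get-method for the boolean property.

```java
// Hypothetical bean illustrating the conventions listed above.
public class ConfirmationBean implements java.io.Serializable {
    private boolean confirmed;

    // no-argument constructor that initializes the member variable
    public ConfirmationBean() {
        this.confirmed = false;
    }

    // boolean properties use an is-method instead of a get-method
    public boolean isConfirmed() {
        return confirmed;
    }

    public void setConfirmed(boolean confirmed) {
        this.confirmed = confirmed;
    }
}
```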

JavaBean Syntax

A simple JavaBean class might look something like Listing 1.

Listing 1. Simple JavaBean class (UserBean)

package com.codenotes;
 
public class UserBean {
   private String firstName;
   private String lastName;
 
   //default constructor
   public UserBean() {
      this.firstName = "";
      this.lastName = "";
   }
 
   //get methods
   public String getFirstName() {return firstName;}
   public String getLastName() {return lastName;}
 
   //set methods
   public void setFirstName(String firstName) {
      this.firstName = firstName;
   }
 
   public void setLastName(String lastName) {
      this.lastName = lastName;
   }
}

This class has get and set methods for two fields: firstName and lastName. Notice that this class exactly follows the conventions listed previously.

To use the Bean from a JSP script, we need only add the code in Listing 2 to the top of the JSP.

Listing 2. Using UserBean

<jsp:useBean
   id="UserBean"
   class="com.codenotes.UserBean"
   scope="session"/>
<jsp:setProperty name="UserBean"
   property="*"/>

The <jsp:useBean> element creates an instance of the UserBean class and assigns it session scope, which means it will remain available until the end of a user's entire session with your Web application. The <jsp:setProperty> element, in this case, populates the data structure of the JavaBean with the information in the Request object. Note that this will only work if the fields in the request exactly match the fields in the JavaBean.

We can now access the data stored in the bean from anywhere in the JSP, no matter how many times it is reloaded, by using code like Listing 3.

Listing 3. Getting values from a JavaBean

<jsp:getProperty name="UserBean" property="firstName"/>

The JSP processor automatically interprets <jsp:getProperty> and <jsp:setProperty> tags and calls the appropriate methods in the Bean class itself.

Server-side Validation Using JavaBeans

In an ordinary HTML page, the only way to validate user input is on the client side using JavaScript. However, client side validation can be problematic as it depends on the client's browser properly implementing your JavaScript code. In addition, a malicious user can easily download your page, make modifications to disable the JavaScript and work around your validation.

Using JavaBean tags, however, you can easily make a "validation bean" that will perform secure server-side validation on your data entry fields. Once you are sure that the data is valid, you can transfer it from this bean to any back-end system, such as a database or EJB. The validation bean thus becomes an intermediate step which helps secure your Web forms without requiring a significant modification to your middle tier or back end data systems.

ValidationBean

ValidationBean might look something like Listing 4.

Listing 4. A ValidationBean

package com.codenotes;
 
import java.util.Vector;
 
public class ValidationBean {
   private String m_email = "";
   private String m_name = "";
   private int m_age = 0;
 
   private Vector messages = new Vector();
 
   public ValidationBean() {
      m_email = "";
      m_name = "";
      m_age = 0;
   }
 
   public String getEmail() {return m_email;}
   public void setEmail(String email) {m_email = email;}
   public void isValidEmail() {
      //check for @ symbol somewhere in string
      if ((m_email.length() == 0) || (m_email.indexOf("@") < 0)) {
         messages.add("Enter a valid email.");
      }
   }
 
   public String getName() {return m_name;}
   public void setName(String name) {m_name = name;}
   public void isValidName() {
      //check if name exists
      if (m_name.length() == 0) {
         messages.add("Name is required");
      }
   }
 
   public int getAge() {return m_age;}
   public void setAge(int age) {m_age = age;}
   public void isValidAge() {
      //must be at least 18 years old
      if (m_age < 18) {
         messages.add("You must be 18 years old to register.");
      }
   }
 
   public String[] getMessages() {
      messages.clear();   //don't accumulate messages across calls
      isValidName();
      isValidAge();
      isValidEmail();
      return (String[])messages.toArray(new String[0]);
   }
 
}

The code in Listing 4 contains some interesting features. First, you should note that every bean you make for use in a JSP should be assigned to a package. If you don't assign the bean to a package, most servlet containers will automatically assume it's part of the automatically created package when the JSP is compiled. This problem also occurs with custom tag handlers.

Second, although the isValidXXX() functions traditionally return a Boolean value, in our case we have chosen to simply add a message to our message vector instead. The isValidXXX() functions are meant to be called internally. From the JSP, we simply call getMessages() and check the length. If any messages are present, then some data is invalid.

Finally, if we wanted a more advanced sort of validation, we can easily expand the logic in each of the isValidXXXX() methods. For example, we could set the email field to be valid if it is missing, or it has the proper format (in other words, the field would be optional).
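For instance, the optional-email variant described above might look like the helper below (a sketch, not the article's code; the check itself is the same crude indexOf test used in Listing 4, and the class name OptionalEmailCheck is hypothetical).

```java
// Sketch: an "optional" email check. An empty field is treated as
// valid; a non-empty field must contain an "@" somewhere.
public class OptionalEmailCheck {

    public static boolean isValidEmail(String email) {
        return email.length() == 0 || email.indexOf("@") >= 0;
    }
}
```

Inside ValidationBean, isValidEmail() would then add a message only when this check returns false.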

Using ValidationBean

The JSP code itself will be very similar to that discussed in the previous section. Listing 5 shows an example of a form using ValidationBean.

Listing 5. Form using ValidationBean

<% response.setDateHeader("Expires", 0); %>
 
<jsp:useBean id="validBean" scope="session"
   class="com.codenotes.ValidationBean" />
<jsp:setProperty name="validBean"
   property="*"/>
 
<html>
   <body>
      <% String[] messages = validBean.getMessages();
      if (messages != null && messages.length > 0) {
         %>
         Please change the following fields:
         <ul>
            <% for (int i=0; i < messages.length; i++) {
               out.println("<li>" + messages[i] + "</li>");
            } %>
         </ul>
      <% } else {
         //Valid form!
         //transfer data from validation bean to memory
         session.setAttribute("name", validBean.getName());
 
         //then forward to next page
         %>
         <jsp:forward page="CompleteForm.jsp" />
      <% } %>
 
      <form method="post">
         <input type="text" name="name"
            value='<jsp:getProperty name="validBean"
            property="name"/>' />
         Name <br/>
 
         <input type="text" name="age"
            value='<jsp:getProperty name="validBean"
            property="age" />' />
         Age <br/>
 
         <input type="text" name="email"
            value='<jsp:getProperty name="validBean"
            property="email" />' />
         Email <br/>
 
         <input type="submit" value="Submit" />
      </form>
   </body>
</html>

    The JSP page in Listing 5 performs the following actions:

    1. Populates an instance of ValidationBean with data from the Request object.
    2. Queries the bean to see if it has any messages in its messages vector.
      • If there are error messages in the vector, the page displays a list of the error messages and redisplays the form using the data from the Bean to fill out the fields, where available.
      • If there are no error messages in the vector, the Bean is added to the user's current session, and the user is then forwarded to the next appropriate page.

    Note that we have to differentiate between single quotes and double quotes within each tag. If we don't switch quote types, the servlet container may become confused and parse the tags incorrectly.

    Using server-side validation offers many advantages over JavaScript and client-side code. Although you do have to write more boilerplate code in developing your JavaBeans, the resulting savings and simplicity in your JSP more than make up for it.

    JavaBeans Example

    As mentioned previously, the CodeNotes Web site uses the Session object instead of JavaBeans to persist form data, so there is no example of JavaBeans we can extract from the CodeNotes site. Instead, for the example in this article, we will design a simple Registration form similar to the one located at http://www.codenotes.com/login/registerAction.aspx, except that it uses JavaBeans instead of Session. In the next two major sections of this article, we will convert the example to ASP.NET using the Java Language Conversion Assistant (JLCA) to see how conversion of JavaBeans works, and then we will design a brand new Registration form in ASP.NET and show how new features in ASP.NET make persisting form data trivial.

    Note that to keep this example simple, we will not do any sort of validation on form data.

    UserBean

    UserBean is a straightforward JavaBean with basic getters and setters for the data in the form. The only thing to note is that we've put it in package codenotes. JavaBeans should always be placed in packages; otherwise, the servlet processor will assume they are in a default package and won't be able to find them there. Listing 6 shows the code for UserBean.java.

    Listing 6. UserBean.java

    package codenotes;
     
    public class UserBean implements java.io.Serializable {
     
       private String userName;
       private String password;
       private String firstName;
       private String lastName;
       private String displayName;
     
       public UserBean() {
          this.userName="";
     
          this.password="";
          this.firstName="";
          this.lastName="";
          this.displayName="";
       }
     
       public String getUserName() {return userName;}
       public String getPassword() {return password;}
       public String getFirstName() {return firstName;}
       public String getLastName() {return lastName;}
       public String getDisplayName() {return displayName;}
     
     
       public void setUserName(String userName) {
          this.userName=userName;
       }
     
       public void setPassword(String password) {
          this.password=password;
       }
     
       public void setFirstName(String firstName) {
          this.firstName=firstName;
       }
     
       public void setLastName(String lastName) {
          this.lastName=lastName;
       }
     
       public void setDisplayName(String displayName) {
          this.displayName=displayName;
       }
    }

    Register.htm

    Register.htm is a simple HTML file that will contain the form for the user to fill out. It contains no special tags or JavaScript code of any kind; it simply uses welcome.jsp (described in the next section) as its target ACTION. We're also going to use HTTP GET instead of POST, so you can see the parameters in the URL. Listing 7 shows Register.htm.

    Listing 7. Register.htm

    <html>
       <body>

          <h1>Registration Form</h1>

          <form action="welcome.jsp" method="get" id="regForm"
             name="regForm">
             <table>
                <tr>
                   <td>UserName/Email:</td>
                   <td><input type="text" name="userName" /></td>
                </tr>
                <tr>
                   <td>Password:</td>
                   <td><input type="password" name="password" /></td>
                </tr>
                <tr>
                   <!-- "password2" is illustrative; it is not bound to the bean -->
                   <td>Re-enter Password:</td>
                   <td><input type="password" name="password2" /></td>
                </tr>
                <tr>
                   <td>FirstName:</td>
                   <td><input type="text" name="firstName" /></td>
                </tr>
                <tr>
                   <td>LastName:</td>
                   <td><input type="text" name="lastName" /></td>
                </tr>
                <tr>
                   <td>DisplayName:</td>
                   <td><input type="text" name="displayName" /></td>
                </tr>
                <tr>
                   <td colspan="2"><input type="submit"
                      value="Register" /></td>
                </tr>
             </table>
          </form>
       </body>
    </html>
    Welcome.jsp

    Finally, on Welcome.jsp we simply display the data that was entered into the form. This is where we populate a UserBean instance with the results of the form submission, and then use <jsp:getProperty> tags to access the information from the Bean when needed. Listing 8 shows Welcome.jsp.

    Listing 8. Welcome.jsp

    <jsp:useBean
       id="UserBean" class="codenotes.UserBean"
       scope="session" />
    <jsp:setProperty
       name="UserBean" property="*" />
    <html>
       <body>

          <h1>Welcome!</h1>

          <p>Your registration data:</p>

          <table>
             <tr>
                <td>User Name:</td>
                <td><jsp:getProperty name="UserBean"
                   property="userName" /></td>
             </tr>
             <tr>
                <td>Password:</td>
                <td><jsp:getProperty name="UserBean"
                   property="password" /></td>
             </tr>
             <tr>
                <td>First Name:</td>
                <td><jsp:getProperty name="UserBean"
                   property="firstName" /></td>
             </tr>
             <tr>
                <td>Last Name:</td>
                <td><jsp:getProperty name="UserBean"
                   property="lastName" /></td>
             </tr>
             <tr>
                <td>Display Name:</td>
                <td><jsp:getProperty name="UserBean"
                   property="displayName" /></td>
             </tr>
          </table>
       </body>
    </html>
       

    Converting JSP to ASP.NET by Using JLCA

    The Java Language Conversion Assistant (JLCA) converts a JavaBean class into a Microsoft® .NET class that implements System.Runtime.Serialization.ISerializable. For each pair of get and set methods in the original Bean class, JLCA creates a property (virtual public object) with get and set accessors. An ASP.NET page can then create an instance of the serializable class and access its properties through that instance.

    We'll convert the UserBean example from the previous section to an ASP.NET application. If you run the conversion wizard on the application found in the jlcademo.msi, you should get an almost "perfect" conversion, with no warnings or errors at all.

    JLCA will leave register.htm as is, and will convert UserBean.java to a C# file named UserBean.cs. Examining UserBean.cs, we can see how each of UserBean's get/set pairs has been converted to a property, like the one shown in Listing 9.

    Listing 9. A UserBean property

    virtual public System.String UserName
    {
       get
       {
          return userName;
       }
     
       set
       {
          this.userName = value;
       }
     
    }

    In order for this class to compile correctly, however, you will need to implement a method called getObjectData(), which is required of any class that implements the ISerializable interface. You don't need to write any code for this. Simply do the following:

    1. Switch to the Class View instead of the Solution Explorer.
    2. Expand beansConv > codenotes > UserBean > Bases and Interfaces.
    3. Right-click ISerializable, and then click Add > Implement Interface.

    This will add the necessary implementation code for getObjectData() to your class, and conceal it within a region so you don't have to worry about it.

    The welcome.jsp file from the previous example converts perfectly into welcome.aspx, so you don't need to make any changes there. In fact, all you need to do now to get the application running is to right-click register.htm in the Solution Explorer, and then click Set as Start Page. After that, you can run the application and see that it is functionally identical to the JavaBean example from the previous section.

    One thing you may notice is that JLCA has added a significant amount of additional code to the beginning of welcome.aspx. This code does two things:

    1. Checks to see if the user's current session already contains an instance of UserBean. If it doesn't, it creates a new, empty UserBean.
    2. Populates the UserBean with values from the Request object, if there are any. Because UserBean.cs implements ISerializable, it is able to populate a collection representing the properties of the Bean and then cycle through them, adding the correct value from the Request object to the correct property.

    This code replaces the JSP container code that performed the same actions when <jsp:useBean> and <jsp:setProperty> tags were encountered. As you can see, the JLCA does its best to ensure that your converted application remains as faithful as possible to the functionality of the original JSP code.

    The ASP.NET Alternative

    Instead of using an accessory data object like a JavaBean to store field data during validation, ASP.NET adds a special hidden field named __VIEWSTATE to the generated source for every form. This hidden field stores the state of all controls on the page, such as the text entered in a text box, whether checkboxes are checked or unchecked, the contents and selected items in list boxes, and so on. Therefore, there is no need for you to add additional code to persist field values each time a Submit button is clicked and validation is performed.

    Traditionally, many ASP applications used standard hidden fields to store field data between validation attempts. ViewState alleviates several problems with normal hidden fields, including:

    1. Normally, extra code was required to put field values into hidden fields for storage upon submission. ASP.NET does not require any extra code, as it automatically serializes the values of all fields on the page into a single __VIEWSTATE hidden field.
    2. The names and contents of ordinary hidden fields are plainly readable in the source code for an HTML form. The __VIEWSTATE field, on the other hand, is encoded, so it is not readable by humans at a glance, and only the server that generated the page can reliably decode the field and extract values from its contents.
    3. You can specifically identify which controls should or shouldn't be included in the __VIEWSTATE field, and even set a page-level directive to disable ViewState if you choose. All of these settings are controlled with simple properties, and no coding is required.

    Although the use of the __VIEWSTATE field in ASP.NET may seem like an internal implementation detail of the Framework, it can tangibly influence the performance of your applications. Every time the server must update a page, the contents of the form on the page are actually sent to the client twice; first, as regular HTML, and then as encoded values in the __VIEWSTATE field. If an application has many controls that contain a lot of information, this hidden field can grow to sizable proportions. Because this field is transmitted as part of the response to a client in ASP.NET, it can adversely affect transmission time.
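    The round trip can be sketched like this. The serializer below is a toy stand-in, and ASP.NET's actual encoding format differs; only the mechanism (serialize all control state, base64 it into one hidden field, decode it on postback) is the same.

```python
import base64
import pickle

# Toy serializer standing in for ASP.NET's internal one (assumption:
# the real format differs, but the round trip is the same idea).

def encode_viewstate(control_state: dict) -> str:
    """Serialize the state of every control into one opaque string."""
    return base64.b64encode(pickle.dumps(control_state)).decode("ascii")

def decode_viewstate(field_value: str) -> dict:
    """Recover the control state from the posted hidden field."""
    return pickle.loads(base64.b64decode(field_value))

# State of all controls on the page: text entered, checkbox state, list items.
state = {"TextBox1": "hello", "CheckBox1": True, "ListBox1": ["a", "b"]}

# The page carries the state down to the client in a single hidden field...
hidden_field = ('<input type="hidden" name="__VIEWSTATE" value="%s"/>'
                % encode_viewstate(state))

# ...and on postback the server decodes it and restores every control.
assert decode_viewstate(encode_viewstate(state)) == state
```

    The size concern mentioned above is visible here too: every value already rendered as HTML is carried a second time inside the encoded string.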

    Using ViewState

    By default, most controls in ASP.NET automatically have ViewState enabled. This means that you don't have to do anything special in order to have them persist data between validation attempts.

    Let's look at an example. We'll create a simple ASP.NET application that uses several common controls. As usual, we can simply drag components onto the Design View of the Web form. There's no need to add any additional code. In this case, we're going to add a text box, a calendar, and a button.

    If you select any one of the controls and examine its enableViewState property in the properties box, you will see that, by default, it is set to true. This means that the state of the control will be stored in the hidden __VIEWSTATE field, and will be persisted even after clicking the Go! button.

    Run the application. Type your name in the field and select a date on the calendar. Then click Go! Since we didn't add any code to the button, the form will simply post and refresh on the same page, but as you will see, the text you entered in the field and the date you selected on the calendar remain exactly as they were. You can examine the content of the __VIEWSTATE field by right-clicking on the page, and then clicking View Source. Somewhere near the top of the page, you'll see an element that looks like the one in Listing 10.

    Listing 10. A __VIEWSTATE example

      <input type="hidden" name="__VIEWSTATE" value="dDw3MDY2NzMxNDI7dDw7bDxp...7Pj47Phdg9e+N3tG/uHE9I7KBRj6NR9Oe"/>

    This element contains information on the state of each control in the page. You can watch it change, for example, by modifying the date you selected in the calendar.

    Disabling ViewState

    If, for some reason, you don't want an element to persist its state between posts, it is very easy to disable the ViewState on that element. Simply change the value of the enableViewState property for a particular element to "false" instead of "true." Try disabling ViewState on both the textbox and the calendar and then running the example application again. Type some text in the field and select a date as before, and then click Go!

    What happens? The calendar resets itself to its original state, and the date you selected is lost. Because you have configured your calendar not to store its content in __VIEWSTATE, the application will have no memory of any manipulation of that control after a postback to the server.

    However, you may have noticed that the text in your text box remained, even though you disabled ViewState for it as well. This is because certain simple control properties (like the text content of a text box) can be stored as basic text, and therefore ViewState is not necessary (and thus not implemented) to persist the values. ASP.NET simply obtains the value from the Request object instead. However, if you were to dynamically set a more complex property of the text box, such as its background color (for example, with a Change Color button), that change would be lost if ViewState were disabled on the text box.

    Note that you can also disable ViewState for an entire ASP.NET page by clicking on the form and changing the value of its enableViewState property to "false."

    enableViewStateMac

    If you looked at the enableViewState property of an ASP.NET form, you may also have noticed a property below it named enableViewStateMac. MAC stands for Machine Authentication Code. When this property is set to true, ASP.NET will include a machine authentication code in the __VIEWSTATE field. This prevents tampering, as it ensures that only the machine that encoded the __VIEWSTATE field in the first place can decode it and determine field values from it. There is generally no reason to turn this feature off.
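    Conceptually, the machine authentication code is a keyed hash appended to the encoded state, which only the server (holding the key) can recompute and verify. The sketch below illustrates that idea; the key name, hash choice (SHA-1), and byte layout are illustrative assumptions, not ASP.NET's actual format.

```python
import base64
import hashlib
import hmac
import pickle

SERVER_KEY = b"machine-key-kept-on-the-server"   # hypothetical secret

def encode_with_mac(state: dict) -> str:
    payload = pickle.dumps(state)
    mac = hmac.new(SERVER_KEY, payload, hashlib.sha1).digest()
    return base64.b64encode(payload + mac).decode("ascii")

def decode_with_mac(field: str) -> dict:
    raw = base64.b64decode(field)
    payload, mac = raw[:-20], raw[-20:]          # SHA-1 digest is 20 bytes
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha1).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("ViewState has been tampered with")
    return pickle.loads(payload)

state = {"TextBox1": "hello"}
assert decode_with_mac(encode_with_mac(state)) == state

# Flipping a single byte invalidates the code, so tampering is detected:
raw = bytearray(base64.b64decode(encode_with_mac(state)))
raw[0] ^= 1
try:
    decode_with_mac(base64.b64encode(bytes(raw)).decode("ascii"))
except ValueError as err:
    print(err)  # ViewState has been tampered with
```

    A client can still read a base64-encoded payload, so the MAC guarantees integrity rather than secrecy; that is why it detects tampering rather than hiding the values.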

    Advantages of Using ViewState

    The primary advantages of the ViewState feature in ASP.NET are:

    • Simplicity. There is no need to write or generate complex JavaBean classes in order to store form data between submissions. ASP.NET does everything for you automatically, and you can simply turn ViewState off if you don't want to use it for a particular control. Basically, persistence is all done in the background.
    • Flexibility. It's easy to configure ViewState on a control-by-control basis, so you can have certain fields maintain themselves so that the user does not have to re-enter them, and have other fields reset every time to ensure that the user enters them correctly. There is no need to ensure that the data submitted by your form fits a particular data structure, as the __VIEWSTATE field is encoded and decoded on the fly, with all the information in the correct location and order.

    Limitations of Using ViewState

    The primary limitations of ViewState are:

    • You can't transfer ViewState information from page to page. With JavaBeans, you can simply store the JavaBean in the session and then access it again from somewhere else. This is not possible with ViewState, so if you want to store information in a user's session for later, you need to create your own data object for storage (or store each field individually).
    • ViewState is not suitable for transferring data for back-end systems. That is, you still have to transfer your form data over to the back end using some form of data object. With a JavaBean, you simply transfer a reference to the JavaBean and let the back end extract whatever data it needs.

    Summary

    This article discusses solutions to a problem that will arise in almost every Web application: How to store form data during validation (in other words, how to persist form data between calls to the server).

    In JSP, JavaBeans are the most common solution. JavaBeans are special classes that follow a particular structure: for each field you want to store, you have a get and set method for retrieving and modifying a value in the bean. You can then invoke and access JavaBeans from within JSP code using special tags.

    In ASP.NET, the state of fields is stored in an encoded, hidden field called __VIEWSTATE. Addition of form data to __VIEWSTATE is done automatically by ASP.NET, and there is no need to manually create any auxiliary data objects. ViewState is extremely easy to use and always appropriate for simple persistence of data during validation. For more complicated data transfers, however, such as persisting data across different pages, you will need to create and use a data object.

    Learn PHP Yourself

    PHP is a widely-used general-purpose scripting language that is especially suited for Web development and can be embedded into HTML.
    PHP is fully open source, so users can build their own sites with it at no cost.

    There are many open source packages available, such as Drupal, Joomla, osCommerce and X-Cart. All of them are free: the user simply downloads the source, sets it up, and uses it.
    These packages are a good fit for community sites, e-commerce sites and the like. Their main benefit is that they are free and give the developer a basic platform to build on; the developer then only has to extend and customize them.
    Some downloads are provided below:

    Learn PHP Yourself

    Know about Drupal

    Nowadays, several frameworks are also available for PHP, including those listed below.

    CakePHP

    Symfony

    Smarty (strictly a template engine rather than a full framework)

    Sanjay - By OffshoreSoftwareDevelopmentIndia.com
    Offshore Software Development India is a software solution provider based in Ahmedabad, India. We offer IT services and solutions in the areas of Web Designing, Web Developing, E-Accounting, E-Business, Enterprise Application Integration, On-site Consulting, Customised Application Development, Offshore software development and Outsourcing.

    Internet Marketing And Its Different Forms

    People use the virtual world to carry out real shopping. In fact, 71% of the Americans who were interviewed by the Pew Internet and American Life Project responded positively when asked whether they indulged in online shopping.

    Now this is good news for the millions of goods manufacturers and service providers who are always keen on exploring new opportunities. Thus, the Internet can open a floodgate of possibilities.

    But there are various ways in which you can display your products and services. So, which form of Internet marketing is the most suitable for you? To answer this question, it is imperative for you to know about the various forms of Internet advertising that exist.

    Some of the popular forms include:

    · Contextual advertising: Contextual advertising is one of the most popular forms of search advertising. It entails the placement of ads on web pages whose content matches the subject of the ad. For instance, an ad for watches will be displayed on sites that feature watch reviews, watch-care articles and so on.

    · PPC [Pay-Per-Click]: Under Pay-Per-Click, the URL of the advertiser is displayed at the left, bottom or top of the search results. The URL gets listed only when the keywords typed in by the user match those selected by the advertiser.

    · Affiliate Marketing: Affiliate marketing refers to an advertising form wherein advertisers enlist the services of other websites to market their products. Basically, the affiliate displays a banner, products or URLs of the advertiser on its website. The commission can be paid on a cost-per-click, cost-per-sale or cost-per-lead basis.

    · Rich Media Advertising: Rich Media Advertising refers to the use of streaming audio and video as well as applets for the promotion of the website. The audio or video placed on a web page offers interactivity to the viewers. It also enables the advertisers to communicate their message more effectively.

    · RSS: RSS stands for Really Simple Syndication. It is a web feed format that enables one to keep tabs on updated content. In other words, if a web surfer subscribes to the RSS feed of a particular website, he will be able to keep abreast of the latest content being uploaded to that website. Advertisers often use RSS to convey the latest information about the launch of new products and services. RSS is also popularly used by advertisers to inform customers about the latest discounts, bargains and offers.

    · Blogging: Blogging too can prove to be a great medium for advertising. Many corporate houses maintain blogs, as these enable them to interact with netizens in an informal environment. Blogs also allow them to slip in information about likely new launches in a rather stealthy manner. Businesses are also exploring the option of sponsoring a blog that shares synergies with the products and services of the company, and many business entities are getting their ads displayed on blogs. However, experts believe that including advertorials is a better idea: advertorials can blend into the flow of the topic being discussed on the blog, whereas outright advertisements might not go down well with blog readers.

    · Viral Marketing: ‘If one person knows about it, he will ensure that others come to know about it’ seems to be the working philosophy of viral marketing. If you are a regular visitor to the web, you will have noticed the ubiquitous ‘Tell a Friend’ link below each article. Well, this is what viral marketing is all about. A friend reads an interesting offer, article or informative write-up and informs his friend. This friend then forwards the same to his friend, and the chain continues, thus acting like a virus. Experts have noticed that the information spreads faster if there are freebies, discounts or special offers up for grabs.

    · Autoresponder Marketing: Not many people are aware that autoresponders can serve a marketing purpose, but they can, and very effectively. Many businesses use autoresponders to deliver courses, most of them long-duration courses spread over several weeks, with one installment delivered each week. Companies have realized that such courses can be designed so that the company indirectly promotes its product or service. For instance, if you make video-editing software, you could start a course titled ‘Basics of Video Editing’. One can also use autoresponders to deliver articles connected with the core business of a website. For instance, if you are a cheese producer and also have a website, you can consider delivering recipes based on cheese, and adding a word or two about your new variety of cheese.

    · Search Engine Optimization: Search Engine Optimization is the process of tweaking a site to enhance its rankings on the search engines. It involves processes such as optimizing the content with a proper set of keywords or key phrases, as well as making the site's navigation structure friendlier to users. It also includes small but vital steps such as getting linked from sites with higher page ranks, submitting the site to directories, etc.

    · Podcasting: Podcasts can be used for the purpose of information dissemination. ‘Podcast’ is a portmanteau of the words iPod and broadcast. This form of Internet advertising entails the insertion of audio ads into the audio clips made available for listening. The biggest advantage of podcasting is that it allows one to target a niche audience. For instance, if you are a guitar manufacturer, you can insert your ads into clips that focus on music, especially those that focus on guitars.

    · Behavioral advertising: Behavioral advertising is one of the latest forms of Internet advertising. Here the ads visible to a web surfer are based on his previous surfing behavior. For instance, if a web surfer frequently visits sports sections or sports sites, then under behavioral advertising he will mostly see ads related to sports. So, even if he is surfing the net to order the occasional pizza, he will still see sports ads. Thus, under behavioral advertising the content of the site becomes irrelevant, and the ads shown to a visitor are purely related to his past surfing habits. Behavioral advertising is proving very effective in reaching customers, as the ads are directly related to the reader's interests.

    · Mobile marketing: Under mobile marketing, the target audience is usually sent an SMS or an MMS to inform them about the latest products or services that might interest them. The fact that more and more people are using mobile services has definitely led to a spurt in mobile marketing.

    These are some of the most popular forms of Internet advertising.

    Source by isedb.com

    Award-winning Branding & SEO Guru Shares Expert Do-It-Yourself Tips with Small Businesses

    SEO Guru, Erin Ferree of elf design, inc., breaks the silence of the industry and reveals everything small businesses need to know to do their own SEO. Her new book, "Raise Your Ranking," provides everything a small business needs to know and do to take over their own search engine optimization, save thousands of dollars, and rise to the top of the search engine rankings. The best news: It's really easy.

    Belmont, CA (PRWEB) July 24, 2007 -- Small businesses spend thousands of dollars on website optimization to become visible to potential clients and customers through the search engines. Today award-winning branding and search engine optimization (SEO) guru, Erin Ferree, Principal, Elf Design, Inc. is releasing her top small business SEO secrets. This amazing release of closely-guarded information enables small businesses and entrepreneurs to use those secrets and do their own search engine optimization for a fraction of the usual cost. The first secret she reveals is that small business SEO is actually very easy.

    Visibility on the search engines is a matter of positioning in their search results listings. The better your position in the listing (rank), the more visible your company information becomes. Achieving high rankings is, quite simply, a critical piece of marketing strategy and a critical element of business success. "I think it is important for people in small businesses to know that with the right system, SEO is very easy," says Ferree. "There is really no reason companies can't learn the system, do their own SEO, and know how to re-optimize when necessary." Ferree believes this product gives small businesses control of their own future.

    After years of writing and designing websites for search engine optimization, Ferree has decided to focus her attention on her first passion -- website and logo design. Yet she knows her SEO system can be quickly learned by small businesses. Ferree points out that optimizing a website is not the end of the story. As search engines change, as the words people search for shift, and as a business grows, it becomes necessary to optimize the site again. If you understand how SEO works and how easily it can be done by a small business, you understand that you can save money and time, and retain control of your own future, with a proven system like the one Ferree is offering.

    Her new product, Raise Your Ranking, is a complete guide to Search Engine Optimization. Carefully and clearly explained, this product will teach entrepreneurs and small business marketers how to approach SEO sensibly and cost-effectively. "There are other products on the market that claim to reveal all the secret techniques of site optimization. They promise high Internet traffic flow to your website. But they don't tell you how easy it is, nor do they take you beyond the basics." What is unique about Ferree's product is that it does tell readers how easy SEO can be and it does take them well beyond the basics. What is more, it explains why things work or don't work.

    Raise Your Ranking is intended as a step-by-step guide that teaches small businesses everything they need to know, including the secret strategies Ferree uses for her clients, and how to carry out each step of the strategy themselves. One important element of her strategy is that you don't have to do everything, and you don't have to achieve the number 1 ranking. Readers can simply follow her strategy to discover the most important things to do, how to do them well, and that the goal only needs to be reaching the top 10 in the rankings.

    For access to and information about Raise Your Ranking, please visit www.howtoraiseyourranking.com

    Ferree's track record clearly demonstrates the effectiveness of her strategy. Making this strategy available to small businesses will change the landscape of small business marketing. SEO for small businesses is now something every business can take in-house and use for business success. The revelation that SEO for small businesses is so easy anyone can do it certainly brings new opportunities for small businesses to control their own destiny.

    About Erin Ferree and Elf Design, Inc.:
    Elf design, founded by Erin Ferree, is a brand identity and graphic design firm that has been helping small businesses grow with bold, clean and effective logo designs for over a decade. Elf design offers the comprehensive graphic and web design services of a large agency, with the one-on-one, personalized attention of an independent design specialist. Elf design works closely with their clients to create designs that are visible, credible and memorable--and uniquely theirs. For more information about elf design, please visit: http://www.elf-design.com

    Tuesday, July 24, 2007

    Keep Your Website Search Friendly

    So you have a fantastic website, but no one ever visits. You may ask: why? Your website should be designed with search engines in mind. Too many web designers are graphic artists who excel at image manipulation but lack a basic understanding of search engine optimization. Web design and search engine optimization (SEO) should not be mutually exclusive. Webmasters should have a clear understanding of both design techniques and how the search engines work.

    Incredible graphics on a website without any traffic will do little to fill the coffers. Follow these basic guidelines to ensure that your website is both attractive and well-visited.

    1. Navigation

    Both humans and crawlers (search engines) need to be able to navigate your website. Avoid using technology that prohibits the search engine's ability to spider web pages. The majority of search engines have the ability to follow the links on a website if you use standard HTML. Observe normal convention and make the links obvious and available to all website visitors.

    2. Easy to Read

    Fonts should be legible and webmasters should utilize white space judiciously. Text on the web page should be easy to read.

    3. Speed

    Avoid using overly large graphics that are slow to load. Remember, you have mere seconds to capture the visitor's attention; do not waste them on web pages that are slow to load. Search engines, too, will become impatient and give up on your website if it takes too long to serve the content. For the same reason, avoid free hosting services that might be unreliable or slow to respond if you receive a surge in web traffic.

    4. Consistency

    Your website should maintain a consistent look and feel. In other words, all the pages on the website should have a similar look, color scheme and navigation.

    5. Above the Fold

    The most important information on your web page should appear above the "fold". This means that the website visitor should be able to view the most important content without having to use the scroll bar.

    6. Contact Information

    Include corporate contact information on the website. This lends your company credibility; online, anyone can pretend to be anyone or anything. Including contact information on a website shows that you are a serious and legitimate business entity.

    7. Avoid Javascript /Ajax

    JavaScript and Ajax are cool, but they are not search friendly. It is best to stick with good old HTML. Search engines at this point are unable to spider website content that is displayed using JavaScript. The same is true of websites that are dynamically updated with Ajax. Chances are the body of your website will help it rank well; do not waste that opportunity by hiding content behind JavaScript or Ajax.

    8. Meta Data Matters

    Each and every web page on the website should contain a unique title and description. Many search engines extract meta data from the page header and use it to classify and categorize the web page's listing. The web page title and description should relate to the web page's contents.

    9. Keywords Naturally

    Use website keywords and keyword phrases in the web copy in a natural way. Search engines are starting to discern unnatural text, machine generated content and content that is cobbled together by a bot. Write your website's content for humans not search engines.

    10. Web Page Focus

    Each web page on a website should focus on one or two keywords or keyword phrases, no more, no less. The keywords should be incorporated into the meta tags and web copy.

    An optimized website can bring search traffic and visitors who have a natural interest in your product or service. Optimization should be part of the design process. Before hiring a web designer make sure they understand both your design needs and search engine optimization.

    About the Author:

    Sharon Housley manages marketing for FeedForAll http://www.feedforall.com software for creating, editing, publishing RSS feeds and podcasts. In addition Sharon manages marketing for RecordForAll http://www.recordforall.com audio recording and editing software.

    Monday, July 23, 2007

    Google Offers Advice on Flash Web Sites & SEO

    A Google Groups thread explores whether Flash websites could be detrimental to search engine optimization efforts.

    Bergy of Google offers some feedback and addresses common misconceptions.

    For one, he goes into some of the concerns regarding hidden text. You should consider intent, and if you're hiding unrelated keywords, that's spam and should not be practiced. Flash developers often find themselves having to hide text behind flash animations, and as long as the same text is presented to the user and search engine, the rankings should not be affected.

    The goal of our guidelines against hidden text and cloaking are to ensure that a user gets the same information as the Googlebot. However, our definition of webspam is dependent on the webmaster's intent. For example, common sense tells us that not all hidden text means webspam--e.g. hidden DIV tags for drop-down menus are probably not webspam, whereas hidden DIVs stuffed full of unrelated keywords are more likely to indicate webspam.

    Bergy also reviews the webmaster's site and offers important tips regarding Googlebot's crawlability for specific links. I thought that this is important to share as well:

    Googlebot deals with #anchors differently than ?arguments. Googlebot treats ?arguments as strict part of the URL string, but ignores #anchors, since in normal HTML, they all point to the same page...

    Therefore, if you have a page that is www.example.com/index.html#aboutus, Google is really only sending PageRank to the index.html page, not the #aboutus anchor.
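    The distinction Bergy describes can be seen with a quick URL parse: in standard URL terms, ?arguments form the query, which is sent to the server and makes a distinct URL, while a #anchor is only a fragment, resolved inside the browser.

```python
from urllib.parse import urlsplit

with_query  = urlsplit("http://www.example.com/index.html?section=aboutus")
with_anchor = urlsplit("http://www.example.com/index.html#aboutus")

# The ?argument is part of the query and reaches the server:
assert with_query.query == "section=aboutus"

# The #anchor is only a fragment, never sent to the server, so as far as
# a crawler is concerned this URL names the same page as plain index.html:
assert with_anchor.query == ""
assert with_anchor.fragment == "aboutus"
assert with_query.path == with_anchor.path == "/index.html"
```

    This is why PageRank flows to index.html itself rather than to any of its anchors.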

    Source by www.seroundtable.com


    The New Rules of Marketing and PR (and SEO)

    By Mike Grehan www.clickz.com

    Google's steady rollout of universal search continues to fascinate and thrill me. It's the most exciting thing that's happened in search to date. The opportunities to make all manner of file types and media available to Google so it can provide the end searcher with a much richer experience in a combination of results is a major step forward.

    In the not-too-distant future, however, it could cause a serious conflict for marketers. Increasingly, I see these multimedia results for queries appearing in the organic results. So what happens when I'm running a PPC (define) campaign, bidding on a keyword costing me anything from $0.25 to $25 a click on the right side of the page, while my competitor's video, podcast, blog images, and the like appear for free on the left?

    I can't imagine any advertiser being thrilled about paying for a little blue box with a tiny amount of text, while his competitor has his promotional video pushing everything else on the page below the fold while it's playing right in the middle of the SERP (define).

    A search for "dove beauty workshop" on Google brings the immensely popular Dove Evolution advert right to the top three results, just below Dove's own Campaign for Real Beauty site.

    Click that video result and Dove entirely owns the organic side of the page. And with total views now somewhere in the millions, you could say it's a pretty popular result (if you haven't seen it, you must; it really is a thought-provoking ad).

    In June, I saw a presentation by a Yahoo rep about the U.K. Panama rollout. I raised an eyebrow when I heard him say something along the lines of, "We could tie other results together with our paid results, such as video, news results, and other media." This was the same week Google announced universal search, by the way.

    And I seem to remember a quote from Google's Marissa Mayer about how she thought Google's ads were also good content for the end searcher about the same time she announced universal search, though I'm a little hazy about that. (If you search for "marissa" on Google, you'll see a row of images of her at the top of the pile. Yes, she has her own universal result. Anyone know what's going on in the last pic?)

    All this had me thinking it would be a logical move for search engines to actually switch the results around and have paid where organic used to be and vice versa. Crazy? Maybe.

    But do a search at Google for "bourne ultimatum," and tell me that first paid result (clearly marked as a Google Promotion) is not on the left side.

    There's a statistic I got from somewhere some time ago: only about 20 percent of searches are commercial. Yes, something like 80 percent of searches are informational/research type searches, such as this one for the "history of cookies." There's not a single ad in sight. You could surmise that those people making commercial searches are very happy to see commercial results. Perhaps even happier if they see them tied to video, blogs, podcasts, news, local, stock quotes, images, and the like.

    And you could also surmise that people making those commercial/transactional type searches wouldn't be at all bothered if the usual list of 10 blue organic links appeared on the right side of the page.

    Some of this may seem to SEO (define) purists a little as if I'm going all heretic again (or whatever my detractors usually say). But it's not such a crazy notion as you may think. Or is it?

    Maybe the rules could be about to change again, quite dramatically. Who knows?

    I have a voracious appetite for books. And I'm very fortunate that I get many books sent to me for review, both on marketing and information retrieval. However, when you have up to 12 books at a time waiting to be reviewed, it's very difficult to know which to choose first.

    One book that's been sitting on my desk waiting for my attention is David Meerman Scott's "The New Rules of Marketing and PR." And I feel very guilty for not picking it up sooner. For any search marketer scratching her head about how to deal with Google's universal search and anything else the search engines throw at us, this is the book for you.

    With a foreword by Robert Scoble, it's packed full of advice on how to use news releases, blogs, podcasting, and online media to reach buyers. I read it cover to cover. Somewhere in the middle, Scott declares (in case you hadn't figured it by then) that it's actually a book about search engine marketing.

    If you're going to enter the arena of universal search, make sure you're armed with this book. In fact, I don't just recommend that all search marketers read it; I almost insist.

    Oh, and thanks to Jeff and Bryan Eisenberg for getting a mention and a link in the book for myself, Search Engine Round Table, and Crea8pc Usability.

    Saturday, July 21, 2007

    The PHP.net Google Summer of Code

    Some good news for the PHP community:


    The PHP team is once again proud to participate in the Google Summer of Code. Seven students will "flip bits instead of burgers" this summer:

    • Mentored by Michael Wallner, Hannes Magnusson will work on LiveDocs, which is a "tool to display DocBook XML files in a web browser on the fly, without the need of building all HTML target files first". This project will be of great value to the PHP Documentation Team.
    • The PHP Interpreter uses reference counting to keep track of which objects are no longer referenced and thus can be destroyed. A major weakness in the current implementation is that it cannot detect reference cycles, that is objects that reference each other in a circular graph structure which is not referenced itself from outside the circle. Mentored by Derick Rethans, David Wang will implement a new reference counting algorithm that will alleviate this problem.
    • Xdebug provides a range of useful functionality for PHP developers, including detailed error information, code coverage and profiling support, and support for remote debugging using the GDB and DBGp protocols. Mentored by Xdebug's creator, Derick Rethans, Adam Harvey will develop a cross-platform GUI application that implements the DBGp protocol and allows PHP applications to be debugged using Xdebug in a development environment agnostic fashion.
    • Mentored by Lukas Smith, Konsta Vesterinen will work on the object-relational mapper Doctrine.
    • Mutation Testing, or Automated Error Seeding, is an approach where the testing tool makes some change to the tested code, runs the tests, and if the tests pass displays a message saying what it changed. This approach is different than code coverage analysis, because it can find code that is executed by the running of tests but not actually tested. Mentored by Sebastian Bergmann, Mike Lewis will implement Mutation Testing for PHPUnit.
    • Mentored by Helgi Þormar Þorbjörnsson, Igor Feghali will add support for foreign keys to MDB2_Schema, a package that "enables users to maintain RDBMS independent schema files in XML that can be used to create, alter and drop database entities and insert data into a database".
    • Mentored by David Coallier, Nicolas Bérard-Nault will refactor the internals of Jaws, a Framework and Content Management System for building dynamic web sites, for PHP 6.
    Source: php
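    David Wang's project above targets exactly the kind of leak that pure reference counting cannot handle. As an illustrative sketch (in Python, whose gc module already implements a cycle detector of the sort described; this is an analogy, not PHP internals), here is how a reference cycle survives plain reference counting until a cycle collector runs:

    ```python
    import gc

    class Node:
        """A node that can hold a reference to another node."""
        def __init__(self, name):
            self.name = name
            self.other = None

    # Build a two-node cycle: a -> b -> a. Once the outside names are
    # dropped, neither refcount falls to zero, so a naive reference
    # counter would leak both objects.
    a, b = Node("a"), Node("b")
    a.other, b.other = b, a

    gc.disable()          # rely on pure reference counting only
    del a, b
    leaked = sum(1 for o in gc.get_objects() if isinstance(o, Node))

    collected = gc.collect()   # the cycle detector reclaims the pair
    gc.enable()
    ```

    With the collector disabled, both nodes linger after `del`; a single manual collection pass then finds and frees the unreachable cycle, which is the behaviour the new PHP algorithm is meant to add.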

    Friday, July 20, 2007

    PHP 6 Overview

    PHP 6, it seems, will make the leap and become a cleaner environment, which is something I really appreciate.

    The register_globals, magic_quotes, and safe_mode settings will finally disappear and will hopefully fade slowly into distant memory. It seems PHP 6 will even refuse to start if these settings are found in php.ini. Dropping support for the long forms of the superglobals, like HTTP_POST_VARS, is also scheduled. This is long overdue.
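    As a hedged illustration of what a pre-upgrade check might look like (this is a hypothetical helper script, not an official PHP tool; the directive names are the ones named in the post), one could scan a php.ini for the doomed settings:

    ```python
    # Sketch of a pre-upgrade check: scan a php.ini for directives that
    # PHP 6 plans to remove entirely.
    REMOVED = {"register_globals", "magic_quotes_gpc", "safe_mode"}

    def removed_directives(ini_text):
        """Return the removed directives actually set in this php.ini."""
        found = set()
        for line in ini_text.splitlines():
            line = line.strip()
            if line.startswith(";") or "=" not in line:
                continue  # skip comments and non-setting lines
            name = line.split("=", 1)[0].strip().lower()
            if name in REMOVED:
                found.add(name)
        return sorted(found)

    sample_ini = """
    ; legacy configuration
    register_globals = On
    safe_mode = Off
    memory_limit = 128M
    """
    ```

    Running `removed_directives(sample_ini)` flags `register_globals` and `safe_mode`, the sort of settings a PHP 6 install would reportedly refuse to start with.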

    One thing I was at first a bit hesitant about is moving all the database extensions out of the core and into PECL. It looks like this is not set in stone and seems to be an ongoing discussion. After thinking about it, I believe it would be the right thing to do. It would boost the usage of PDO and thus make it more mature. It would also clear up some of the confusion among newcomers to the PHP sphere. "Use PDO or make an active choice" would probably be best for the future of PHP.

    SOAP is widely used today, and good support for it has long been available as an extension you have to turn on actively if you need it. The extension also has many limitations, as has been discussed recently on the PECL-DEV mailing list. There is also a new PHP SOAP extension being developed that uses Apache Axis2 from the Apache Foundation. I think what is suggested for PHP 6 (fixing most of the remaining issues and implementing support for some of SOAP's security extensions) is the right way to go. I really think SOAP needs to be natively supported in PHP rather than depending on an external library, as is the case with the Axis2 extension.

    Named parameters for functions and methods have also been discussed. Even though I remember how much I enjoyed writing Smalltalk code with named parameters more than 10 years ago at university, I don't think they should be implemented in PHP. Luckily, those who decide on the roadmap thought the same.

    Something that really annoys me today is that you can call methods both statically and dynamically, whether or not they are marked static. It just doesn't make sense. In the still-distant PHP 6, this will generate an E_FATAL. Now that makes sense.
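    Python has always enforced this distinction, so it makes a handy illustration of the behaviour PHP 6 is moving towards; the example below is an analogy sketch with invented names, not PHP semantics:

    ```python
    class Greeter:
        def hello(self):          # instance method: needs a receiver
            return f"hello from {id(self)}"

        @staticmethod
        def version():            # static method: no receiver involved
            return "1.0"

    # Calling each kind of method the intended way works fine.
    g = Greeter()
    ok_dynamic = g.hello()
    ok_static = Greeter.version()

    # Calling the instance method "statically" fails, because there is
    # no object to bind `self` to -- the mismatch PHP 6 plans to make fatal.
    try:
        Greeter.hello()
        mixed_call_allowed = True
    except TypeError:
        mixed_call_allowed = False
    ```

    The mixed call raises a `TypeError` immediately, which is exactly the kind of early, loud failure the E_FATAL change would bring to PHP.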

    All of the above is welcome, but it is in the planned additions that things get really interesting. PHP 6 will include the APC opcode cache in the core distribution. It will not be turned on by default, but I think this is the first small step towards a future with JIT compilation or something along those lines. Good Unicode support is another thing that is sorely needed, and it seems a lot of work will be directed at cleaning up string handling in PHP 6.

    Source: dotdavid.com

    Drupal 7 and PHP

    Article by PHP Open Source.

    ----------------------

    Drupal has long prided itself for staying ahead of the curve technologically. In order to be able to write the best quality Drupal software, Drupal developers need the best programming tools available. Today, the best PHP available is PHP 5.

    PHP 5 has been deployed and tested in production environments for three years. Unfortunately, web hosts have been slow to adopt PHP 5, which has made it difficult for Drupal and many other PHP projects to fully embrace PHP 5's features.

    Now a growing consortium of PHP projects has joined together to push for wider PHP 5 adoption. By embracing PHP 5 together, the projects involved in the GoPHP 5 effort are sending a message to web hosts that it is time to embrace PHP's future.

    Drupal is now part of that movement.

    Drupal is a powerful open source project, and it will now be built on PHP 5. The new version, Drupal 7, requires PHP 5 or higher.

    It was announced at drupal.org today that Drupal 7 will drop PHP 4 compatibility. This is a huge decision, since it will make it impossible for many hosts to run Drupal 7.

    By Offshore Software Development India - Php Development India
    OffshoreSoftwareDevelopmentIndia.com

    Search Engine Optimization - now you can check your website status

    Checking your website's popularity and search ranking is one way to measure your progress. Here are some useful online tools to check your overall website branding.

    Check Alexa ranking - www.alexa.com
    Alexa's ranking is skewed in the sense that it only counts people who have installed the Alexa toolbar.
    You can compare your blog's popularity with three other websites.

    Check blog shares value - www.blogshares.com
    Blogshares is a fantasy market where every blog is equivalent to a stock. Your blog may already be listed there. Check out the value there.

    Check your site's PageRank without the Google Toolbar - pr.blogflux.com
    If your PageRank is less than six, you have some work to do.

    Check domain/website value - dnScoop.com attempts to estimate a value for an established website or domain name using factors such as links pointing to the domain, popularity of the domain, age of the domain, PageRank of the domain, traffic to the domain, and overall branding value.


    By OffshoreSoftwareDevelopmentIndia.com
    Offshore Software Development India is a software solution provider based in Ahmedabad, India. We offer IT services and solutions in the areas of Web Designing, Web Development, E-Accounting, E-Business, Enterprise Application Integration, On-site Consulting, Customised Application Development, Offshore Software Development, and Outsourcing.

    Tuesday, July 17, 2007

    Features of Web 3.0

    Just in case you missed it, the web now has version numbers. Nearly three years ago, amid continued hand-wringing over the dot-com crash, a man named Dale Dougherty dreamed up something called Web 2.0, and the idea soon took on a life of its own. In the beginning, it was little more than a rallying cry, a belief that the Internet would rise again. But as Dougherty's O'Reilly Media put together the first Web 2.0 Conference in late 2004, the term seemed to trumpet a particular kind of online revolution, a World Wide Web of the people.

    Web 2.0 came to describe almost any site, service, or technology that promoted sharing and collaboration right down to the Net's grass roots. That includes blogs and wikis, tags and RSS feeds, del.icio.us and Flickr, MySpace and YouTube. Because the concept blankets so many disparate ideas, some have questioned how meaningful—and how useful—it really is, but there's little doubt it owns a spot in our collective consciousness. Whether or not it makes sense, we now break the history of the Web into two distinct stages: Today we have Web 2.0, and before that there was Web 1.0.

    Which raises the question: What will Web 3.0 look like?

    Yes, it's too early to say for sure. In many ways, even Web 2.0 is a work in progress. But it goes without saying that new Net technologies are always under development—inside universities, think tanks, and big corporations, as much as Silicon Valley start-ups—and blogs are already abuzz with talk of the Web's next generation.

    To many, Web 3.0 is something called the Semantic Web, a term coined by Tim Berners-Lee, the man who invented the (first) World Wide Web. In essence, the Semantic Web is a place where machines can read Web pages much as we humans read them, a place where search engines and software agents can better troll the Net and find what we're looking for. "It's a set of standards that turns the Web into one big database," says Nova Spivack, CEO of Radar Networks, one of the leading voices of this new-age Internet.
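    To make Spivack's "one big database" remark concrete, here is a toy sketch of the triple model behind the Semantic Web; the subjects and predicates are invented for illustration, not taken from any real RDF vocabulary:

    ```python
    # A toy triple store: the Semantic Web models facts as
    # (subject, predicate, object) triples that machines can query.
    triples = [
        ("TimBernersLee", "invented", "WorldWideWeb"),
        ("WorldWideWeb", "runsOn", "Internet"),
        ("RadarNetworks", "ledBy", "NovaSpivack"),
    ]

    def query(subject=None, predicate=None, obj=None):
        """Return every triple matching the non-None fields."""
        return [
            (s, p, o) for (s, p, o) in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)
            and (obj is None or o == obj)
        ]

    # "What did Tim Berners-Lee invent?"
    answers = query(subject="TimBernersLee", predicate="invented")
    ```

    A search engine or software agent could answer such structured questions directly, instead of guessing from keywords on a page, which is the promise Spivack is describing.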

    But some are skeptical about whether the Semantic Web—or at least, Berners-Lee's view of it—will actually take hold. They point to other technologies capable of reinventing the online world as we know it, from 3D virtual worlds to Web-connected bathroom mirrors. Web 3.0 could mean many things, and for Netheads, every single one is a breathtaking proposition.

    Source

    Offshore Software Development India (OSDI) venturing abroad.

    After last year's success with the shipping portal shipping-exchange.com and the blog site blogfreehere.com, the IT Director of Offshore Software Development India (OSDI) recently visited Scotland, England, and Wales in search of new projects. OSDI offers a wide range of IT services, mainly focusing on Business Process Outsourcing (BPO), Software Development, IT Consultancy, Web Designing / Web Development, Offshore Outsourcing, Multimedia, Customized Software Applications, and Search Engine Optimization (SEO). Returning to Ahmedabad from the UK last weekend, the IT Director told the staff that there are new challenges to meet and much to deliver to UK-based customers.

    Technology is a wide arena, like outer space. Enormous potential lies within it, and companies like Offshore Software Development India (OSDI), together with clients like you, can explore it. The days of the stand-alone PC are long gone; the World Wide Web has reached every PC, server, mobile, and laptop we use. There is plenty to explore, from web pages and online news delivered via RSS feeds to podcasts, and YouTube.com has changed everything as far as online video is concerned. Imagine your website as one of millions sitting on the Net waiting to be discovered. The hits on your website count: they generate inquiries, business, and the turnover you have been waiting for. OSDI can help your website succeed in this highly competitive market. Search engine optimisation is the way ahead.

    Our Search Engine Optimisation service is the perfect fusion of linguistic skills, technical know-how, and market-sector research, combined with a keen eye for customer needs. A business is only successful if the ROI is good; hence we always say, "Deliver with Difference to Succeed." We are confident handling major technology brands. We understand what your brand means to you. We can work successfully with complex sites and diverse needs. We try to go that extra mile just for you.


    We provide value for your money. Our Search Engine Optimisation service includes Keyword analysis, Solutions for non-search engine compatible sites, Competitor analysis, On-page optimisation, Deep site optimisation, Valuable link building, Brand protective approach, Solutions for catalogue sites, Fast, manual search engine submission, Optimisation for a re-brand, Position reporting and Page ranking improvement. We try to compete with the International Market of the Internet.


    We are humble professionals, so please forgive us if we are always on about your business needs. Our core expertise also lies in a few more areas of the IT sector: Web Development, Web Designing, Outsourced Offshore Software Application Development, .NET Development using Microsoft technologies, Shopping Cart / E-store Development, Customer Relationship Management Portal Development, E-commerce Application Development, Auction Websites and Portals, Commerce Server-based solutions, and Content Management Systems (CMS) on the Web. We also believe in open source as an equally competitive solution. Our services in this area can meet your needs for web applications and application re-designing, PHP development (exploiting open source to enhance your business while lowering your maintenance cost), and Joomla- and Drupal-based solutions built on a Content Management Framework (CMF).


    http://www.offshoresoftwaredevelopmentindia.com


    We just like to share our activities with the world, so if you feel like trying us out, why not contact us at info@offshoresoftwaredevelopmentindia.com or call us on +91-79-65457841? We would like to talk about your business needs, and it's free.

    The 7 Essential Title Tag Strategies of High Ranking WebPages in 2007

    Perhaps you remember the days when cutting-edge webpage design boasted animated GIFs and focused on keyword density for top search engine rankings. These days, however, standard fare often combines Flash animation with a heavy incoming-link campaign. But through all the changes, one element remains constant: the importance of the HTML title tag. This little tag was, and still is, the single most important on-page element of high ranking webpages.

    To lend perspective, let’s wander back for a moment to the late 90’s when all this SEO work really got started. The title tag was, to put it mildly, tantamount to success. At that time the immensely popular, but now-defunct, Infoseek search engine bestowed top rankings on pages with the highest number of keyword repetitions within the title. This foremost strategy, combined with page freshness, was key. Bear in mind that, at the time, Infoseek was king and Google didn’t even exist!

    Many an SEO worked around the clock, constantly reformatting and resubmitting pages to see what they could, frankly, get away with before Infoseek would finally ban the domain. In many cases the SEO would then simply begin the whole trial-and-error, push-the-limits process anew with a fresh domain. Personally, I remember submitting pages with over 100k worth of text in the title tag, then sitting back and basking in the glow of success as my pages rocketed straight to the top a mere 5 minutes after submission. Boy, was that fun!

    Alas, such a simplistic approach to SEO didn't last too long; the engines evolved and got much smarter, and SEO work has grown proportionately more difficult. But one thing that hasn't changed, regardless of which search engine you're targeting, is the importance of getting your title tags right. By the way, just to be sure we're on the same page, a title tag looks like this…

    <title>Your Keywords Go Here</title>

    Title tags 2007

    Today the title tag remains a critical component of top scoring webpages. While it’s true that inbound links can cause a webpage to rank very well even if the keyword is missing from the body of the page, you’ll seldom find a page without the keyword in the title tag that ranks highly for a competitive search.
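    As a small, hedged illustration of why the title tag is worth auditing, the following sketch (Python standard library only; the helper names are my own, not part of any SEO tool) extracts a page's title and checks it for a keyword:

    ```python
    from html.parser import HTMLParser

    class TitleExtractor(HTMLParser):
        """Collect the text inside the <title> element."""
        def __init__(self):
            super().__init__()
            self.in_title = False
            self.title = ""

        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self.in_title = True

        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False

        def handle_data(self, data):
            if self.in_title:
                self.title += data

    def title_contains(html, keyword):
        """True if the page's title tag mentions the keyword."""
        parser = TitleExtractor()
        parser.feed(html)
        return keyword.lower() in parser.title.lower()

    page = "<html><head><title>Dell Computers | Shop Online</title></head></html>"
    ```

    Running `title_contains(page, "dell computers")` confirms the target phrase is present; pages that fail such a check are the ones that seldom rank for competitive searches.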

    These days, there persists both myths and confusion about the role the title tag actually plays within the ranking formulas. So, for that reason, let’s take a fresh look at what actually is helping pages score well in the year 2007.

    Inside the membership area of SearchEngineNews.com, the rest of this article talks about how to optimize:

    The 7 Essential Title Tag Strategies of Today’s High Ranking WebPages

    Now that you know how important the title tag really is, you’ll want to incorporate these top seven strategies to allow your titles to work at maximum power, search-engine-wise…

    1. Length of Your Title: When creating titles for your webpages, remember that anything more than…

    2. Word Proximity: Search engines actually do pay attention to the distance between words for multiple keyword searches. For example, in a search for … a webpage title tag that contains … will typically hold a ranking advantage over another webpage with a title tag such as …

    As for punctuation…

    3. Keyword Location: As a general rule, the closer you place your keyword to the … the better the ranking advantage. However, bear in mind that we’ve seen … you can expect better results by placing your keywords …

    4. Word Order: Consider the search dell computers. This will generate far different results than a search for … The search engines do pay attention to … so be sure to position them in the most likely order that …

    However, be aware of the opportunities that …

    5. Repetitions: Should you use the keyword more than once in the title? The answer is…

    6. Titles for Human Consumption: There is one enduring constant of title tag content creation that must remain a top priority…

    7. What Words to Use: By now it should be obvious that you should … We are still seeing many, many web sites that … And, that’s a huge mistake.

    Now, if your site is guilty of committing this error, then you should probably jump up and down for joy! …Why? Because…

    Remember that it isn’t difficult to …

    And also be aware that your SE-knowledgeable competitors will be rolling on the floor laughing if they ever see … as your webpage title within the search results, a mistake caused by neglecting …

    How Each Specific Major Search Engine Utilizes the Title Tag

    Considering how important the title tag is to your ranking success, let’s focus on the top three engines and break down exactly what they’re responding to in terms of high ranking title tags…

    Google — Believe it or not, we’ve recorded Google indexing up to …

    Also bear in mind that Google does not respond to …

    Source: creeper-seo.com