
Thursday, December 2, 2010

CRON

We can use any of these commands, though you first need to understand the crontab format itself. Each line in a crontab specifies five time fields in the following order: minutes (0-59), hours (0-23), days of the month (1-31), months (1-12), and days of the week (0-7, where Monday is 1 and Sunday is both 0 and 7). The days of the week and months can also be specified by three-letter abbreviations like mon, tue, jan, feb, etc. Each field can also hold a range of values (e.g. 1-5 or mon-fri), a comma-separated list of values (e.g. 1,2,3 or mon,tue,wed), or a range of values with a step (e.g. 1-6/2, which expands to 1,3,5).
That sounds a little confusing, but with a few examples, you will see that it is not as complicated as it sounds.
Code Listing 3.8: Examples
# Run /bin/false every minute year round
*     *     *     *     *        /bin/false

# Run /bin/false at 1:35 on Monday through Wednesday and on the 4th of every month
35    1     4     *     mon-wed  /bin/false

# Run /bin/true at 22:25 on the 2nd of March
25    22    2     3     *        /bin/true

# Run /bin/false at 2:00 every Monday, Wednesday and Friday
0     2     *     *     1-5/2    /bin/false
Note: When both the day of the month and the day of the week are restricted (neither is *), the job runs when either field matches. If only one of them is restricted, that one takes precedence, and * for both simply means every day.
To test what we have just learned, let's go through the steps of actually inputting a few cron-jobs. First, create a file called crons.cron and make it look like this:
Code Listing 3.9: Editing crons.cron
$ nano crons.cron
#Mins  Hours  Days   Months  Day of the week
10     3      1      1       *       /bin/echo "I don't really like cron"
30     16     *      1,2     *       /bin/echo "I like cron a little"
*      *      *      1-12/2  *       /bin/echo "I really like cron"
Now we can add that crontab to the system with the "new" command from the table above (for Vixie cron, crontab crons.cron).
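Before installing the file, it doesn't hurt to sanity-check it. Here is a quick sketch: it recreates crons.cron from the listing above and uses a simple awk field count (the check is only illustrative; cron itself does stricter parsing when you install the file):

```shell
# recreate the crontab file from the listing above
cat > crons.cron <<'EOF'
10     3      1      1       *       /bin/echo "I don't really like cron"
30     16     *      1,2     *       /bin/echo "I like cron a little"
*      *      *      1-12/2  *       /bin/echo "I really like cron"
EOF
# every non-comment, non-empty line needs the five time fields plus a command
awk '!/^#/ && NF { if (NF < 6) exit 1 }' crons.cron && echo "crons.cron looks well-formed"
# then install it (requires a running cron daemon):
#   crontab crons.cron
#   crontab -l    # verify the installed crontab
```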

4.  Using cronbase
As mentioned earlier, all of the available cron packages depend on sys-process/cronbase. The cronbase package creates /etc/cron.{hourly,daily,weekly,monthly} and a script called run-crons. You might have noticed that the default /etc/crontab contains something like this:
Code Listing 4.1: Default system crontab
*/15 * * * *     test -x /usr/sbin/run-crons && /usr/sbin/run-crons
0  *  * * *      rm -f /var/spool/cron/lastrun/cron.hourly
0  3  * * *      rm -f /var/spool/cron/lastrun/cron.daily
15 4  * * 6      rm -f /var/spool/cron/lastrun/cron.weekly
30 5  1 * *      rm -f /var/spool/cron/lastrun/cron.monthly
Without going into too much detail, we can assume that these commands will effectively run your hourly, daily, weekly and monthly scripts. This method of scheduling cron-jobs has some important advantages:
  • They will run even if your computer was off when they were scheduled to run
  • It is easy for package maintainers to place scripts in those well defined places
  • You know exactly where your cron-jobs and your crontab are stored, making it easy for you to backup and restore this part of your system
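As a sketch of the cronbase workflow: any executable script dropped into /etc/cron.daily is picked up by run-crons on its daily pass. The script below, its name, and its paths are all hypothetical stand-ins for whatever your real daily job does:

```shell
# stage a hypothetical daily job
cat > /tmp/backup-home <<'EOF'
#!/bin/sh
# archive /home; a stand-in for whatever your daily job really does
tar czf /var/backups/home-$(date +%F).tar.gz /home 2>/dev/null
EOF
chmod +x /tmp/backup-home
# then, as root, move it into place so run-crons finds it:
#   mv /tmp/backup-home /etc/cron.daily/backup-home
echo "staged /tmp/backup-home"
```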
5.  Final Notes
If you're having problems getting cron to work properly, you might want to go through this quick checklist.
  • Is cron running? Run ps ax | grep cron and make sure it shows up!
  • Is cron working? Try: * * * * * /bin/echo "foobar" >> /file_you_own and make sure it works
  • Is your command working? Try: * * * * * /bin/foobar > /file_you_own 2>&1 and look for errors in /file_you_own
  • Can cron run your job? Check the cron log, usually /var/log/cron.log or /var/log/messages, for errors
  • Are there any dead.letters? cron usually sends mail when there's a problem; check your mail and also look for ~/dead.letter.
Remember, each cron package is different and the range of features varies greatly. Be sure to consult the man pages for crontab, fcrontab or anacrontab, depending on what you use.

How to compile and install a new Linux kernel

Be especially cautious when messing around with the kernel. Back up all of your files, and have a working bootable recovery floppy disk or CD-ROM nearby. Learn how to install a kernel on a system that doesn't matter. You've been warned. This is obviously a very short guide; only use in conjunction with a more thorough guide such as The Linux Kernel HOWTO
1. Download the latest kernel from kernel.org
The kernel comes as a 20 to 30 MB tar.gz or tar.bz2 file. It will decompress to about 200 MB, and during the later compilation you will need additional space.
Example:
wget http://www.kernel.org/pub/linux/kernel/v2.4/linux-2.4.19.tar.gz
tar zxvf linux-2.4.19.tar.gz
cd linux-2.4.19

2. Configure the kernel options
This is where you select all the features you want to compile into the kernel (e.g. SCSI support, sound support, networking, etc.)
make menuconfig
* There are different ways to configure what you want compiled into the kernel; if you have an existing configuration from an older kernel, copy the old .config file to the top level of your source and use make oldconfig instead of menuconfig. This oldconfig process will carry over your previous settings, and prompt you if there are new features not covered by your earlier .config file. This is the best way to 'upgrade' your kernel, especially among relatively close version numbers. Another possibility is make xconfig for a graphical version of menuconfig, if you are running X.
3. Make dependencies
After saving your configuration above (it is stored in the ".config" file) you have to build the dependencies for your chosen configuration. This takes about 5 minutes on a 500 MHz system.
make dep

4. Make the kernel
You can now compile the actual kernel. This can take about 15 minutes to complete on a 500 MHz system.
make bzImage
The resulting kernel file is "arch/i386/boot/bzImage"
5. Make the modules
Modules are parts of the kernel that are loaded on the fly, as they are needed. They are stored in individual files (e.g. ext3.o). The more modules you have, the longer this will take to compile:
make modules

6. Install the modules
This will copy all the modules to a new directory, "/lib/modules/a.b.c", where a.b.c is the kernel version.
make modules_install

* In case you want to re-compile...
If you want to re-configure the kernel from scratch and re-compile it, you must also issue a couple of "make" commands that clean intermediate files. Note that "make mrproper" deletes your .config file. The complete process is:
make mrproper
make menuconfig
make dep
make clean
make bzImage
make modules
make modules_install

* Installing and booting the new kernel
For the remainder of this discussion, I will assume that you have LILO installed on your boot sector. Throughout this process, always have a working bootable recovery floppy disk, and make backups of any files you modify or replace. A good trick is to name all new files with -a.b.c (kernel version suffix) instead of overwriting files with the same name, although this is not shown in the example that follows.
On most Linux systems, the kernels are stored in the /boot directory. Copy your new kernel to that location and give it a unique name.
Example:
cp arch/i386/boot/bzImage /boot/vmlinuz-2.4.19
There is also a file called "System.map" that must be copied to the same boot directory.
cp System.map /boot
Now you are ready to tell LILO about your new kernel. Edit "/etc/lilo.conf" as per your specific needs. Typically, your new entry in the .conf file will look like this:
image = /boot/vmlinuz-2.4.19
  label = "Linux 2.4.19"
Make sure the image points to your new kernel. It is recommended you keep your previous kernel in the file; this way, if the new kernel fails to boot you can still select the old kernel from the lilo prompt.
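A minimal lilo.conf sketch with both kernels listed might look like the fragment below. The device names and the old version number are hypothetical; adjust them to your system:

```
boot = /dev/hda
prompt
timeout = 50

image = /boot/vmlinuz-2.4.19
  label = "Linux 2.4.19"
  root = /dev/hda1
  read-only

image = /boot/vmlinuz-2.4.18
  label = "Linux 2.4.18"
  root = /dev/hda1
  read-only
```

With both entries present, the lilo prompt lets you fall back to the old kernel if the new one panics at boot.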
Tell lilo to read the changes and modify your boot sector:
lilo -v
Read the output carefully to make sure the kernel files have been found and the changes have been made. You can now reboot.
Summary of important files created during kernel build:
.config (kernel configuration options, for future reference)
arch/i386/boot/bzImage (actual kernel, copy to /boot/vmlinuz-a.b.c)
System.map (map file, copy to /boot/System.map)
/lib/modules/a.b.c (kernel modules)

ACID

On Databases

Databases come in many forms. The simplest definition of a database is any system of storing, organizing, and retrieving data. With this definition, things like memory, hard drives, file systems, files on those file systems (stored in plain text, tab-delimited, XML, JSON, or even BDB formats), and even applications like MySQL, PostgreSQL, and Oracle are considered databases.
Databases allow users to:
  • Store Data
  • Organize Data
  • Retrieve Data
It is important to keep a broad perspective on what data and databases really are so that you can always choose the best solution for your particular problem.
The SQL databases (MySQL, PostgreSQL, Oracle, and others) are remarkable because of the flexibility and performance they provide. In my work, I look to them first when developing an application, with an eye towards getting the data model right before optimization. Once the application is solid, and once I fully understand which parts of the data system are fast enough and which are too slow, then I can start building my own database on top of the file system or other existing technologies to get the kind of performance I need.
Among the SQL databases, which one is best? There are many criteria I use to evaluate SQL databases, and the one I pay attention to most is how they comply (if at all) with the ACID model.
And given the technical merits of the various SQL databases, I consistently choose PostgreSQL above all other SQL databases when given a choice. Allow me to explain why.

The ACID Model

ACID is an acronym, standing for the four words Atomicity, Consistency, Isolation, and Durability. These are fancy words for some very basic and essential concepts.
Atomicity means that you either do all of the changes you want, or none of them, without leaving the database in some weird in-between state. When you take into account catastrophes like power failures or corruption, atomicity isn't as simple as it first seems.
Consistency means that any state of the database will be internally consistent with the rules that constrain the data. That is, if you have a table with a primary key, then that table will not contain any violations of the primary key constraints after any transaction.
Isolation means that many different parts of the database can be modified at the same time without the changes affecting each other. (A stronger guarantee, serializability, requires that transactions behave as if they occurred one after the other, or at least that their results appear that way.)
Durability means that once a transaction completes, it is never lost, ever.
  • Atomicity: All or nothing
  • Consistency: Rules kept
  • Isolation: No partials seen
  • Durability: Doesn't disappear
ACID compliance isn't rocket science, but it isn't trivial either. These requirements form a minimum standard absolutely necessary to provide a database for a reasonable application.
That is, if you can't guarantee these things, then the users of your application are going to be frustrated since they assume, naturally, that the ACID model is followed. And if the users of the application get frustrated, then the developers of the application will get frustrated as they try to comply with the user's expectations.
A lot of frustration can be avoided if the database simply complies with the principles of the ACID model. If the database gets it right, then the rest of the application will have no problem getting it right as well. Our users will be happy since their expectations of ACID compliance will be met.
Remember: Users expect ACID!

What Violating the ACID Model Looks Like

To consider the importance of the ACID model, let's examine, briefly, what happens when the model is violated.
When Atomicity isn't adhered to, users will see their data partially committed. For instance, they might find their online profile only partially modified, or their bank transfer partially transferred. This is, of course, devastating to the unwary user.
When Consistency is violated, the rules that the data should follow aren't adhered to. Perhaps the number of friends shown doesn't match the friends they actually have in a social networking application. Or perhaps they see their bank balance doesn't match what the numbers add up to. Or worse, perhaps your order system is counting orders that don't even exist and not counting orders that do.
When Isolation isn't guaranteed, they will either have to use a system where only one person can change something at a time, locking out all others, or they will see inconsistencies throughout the world of data, inconsistencies resulting from transactions that are in progress elsewhere. This will make the data unreliable just like violating Atomicity or Consistency. A bank user, for instance, will believe their transfer of funds was successful when in reality their money was simultaneously being withdrawn by another transaction.
When Durability is lost, users can never be sure that their transaction really went through, or that it won't mysteriously disappear down the road with all the trouble that entails.
I am sure we have all had experiences dealing with data systems that didn't follow the ACID model. I remember the days when you had to save your files frequently, and even then you still weren't ensured that all of your data would be properly saved. I also recall applications that would make partial changes, or incomplete changes, and expose these inconsistent states to the user.
In today's world, writing applications with faults like the above is simply inexcusable. There are too many tools out there that are readily available that make writing ACID compliant systems easy. One of those tools, probably the most popular of all, is the SQL database.

Satisfying ACID with Transactions

The principal way that databases comply with ACID requirements is through the concept of transactions.
Ideally, each transaction would occur in an instant, updating the database according to the state of the database at that moment. In reality, this isn't possible. It takes time to accumulate the data and apply the changes.
Typical transaction SQL commands:
  • BEGIN: Start a new transaction
  • COMMIT: Commit the transaction
  • ROLLBACK: Roll back the transaction in progress
Since multiple sessions can each be creating and applying a transaction simultaneously, special precautions have to be taken to ensure that the data that each transaction “sees” is consistent, and that the effects of each transaction appear all together or not at all. Special care is also taken to ensure that when a transaction is committed, the database will be put in a state where catastrophic events will not leave the transaction partially committed.
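Here is the BEGIN/COMMIT/ROLLBACK cycle in action. SQLite is used below purely because it is a small, self-contained ACID database that makes the example runnable anywhere; the SQL itself is the same in PostgreSQL:

```shell
db=/tmp/acid-demo.db
rm -f "$db"
sqlite3 "$db" "CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER);
               INSERT INTO accounts VALUES ('alice', 100), ('bob', 0);"
# a transfer wrapped in BEGIN ... ROLLBACK leaves no partial change (Atomicity)
sqlite3 "$db" "BEGIN;
               UPDATE accounts SET balance = balance - 40 WHERE name = 'alice';
               UPDATE accounts SET balance = balance + 40 WHERE name = 'bob';
               ROLLBACK;"
sqlite3 "$db" "SELECT name, balance FROM accounts ORDER BY name;"
# -> alice|100 and bob|0: the rollback undid both updates together
```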
Contrary to popular belief, there are a variety of ways that databases support transactions. It is well worth the time to read and understand PostgreSQL's two levels of transaction isolation and the four possible isolation levels in Section 12.2 of the PostgreSQL documentation.
Note that some of the weaker levels of transaction isolation violate some extreme cases of ACID compliance for the sake of performance. These edge cases can be properly handled with appropriate use of row-locking techniques. Row-locking is beyond the scope of this article.
Keep in mind that the levels of transaction isolation are only what appear to users of the database. Inside the database, there is a remarkable variety of methods for actually implementing transactions.
Consider that while you are in a transaction, making changes to the database, every other transaction has to see one version of the database while you see another. In effect, copies of some of the data have to be kept somewhere. Queries to that data have to know which version to retrieve: the copy, the original, or the modified version (and which modified version?). Changes to the data have to go somewhere: the original, a copy, or some modified version (again, which?). Answering these questions leads to the various implementations of transactions in ACID compliant databases.
For the purposes of this article, I will examine only two: Oracle's and PostgreSQL's implementations. If you are only familiar with Oracle, then hopefully you will learn something new and fascinating as you investigate PostgreSQL's method.

Rollback Segments in Oracle

A simple implementation of an advanced transaction system is to store the modified data of each transaction directly in the data tables themselves. Meanwhile, the information required to roll back any particular transaction is stored in the rollback segment. Since rollbacks are rare, it is a reasonable bet to let this data accumulate and then delete it once the transaction has fully committed.
Transactions that are concurrent need to see a consistent view of data. This means that instead of live rows in the table which have been updated by a more recent transaction, they may have to instead retrieve rows from the rollback segment.
Of course, this kind of scheme leads to dreaded errors with the rollback segment, such as ORA-1555 "Snapshot too old". The net result of such errors is that long-running or large transactions have to be run when the database is effectively out of service, or not at all.

Multi-Version Concurrency Control (MVCC) in PostgreSQL

But there is another way, the way that PostgreSQL handles it.
PostgreSQL's MVCC keeps all of the versions of the data together in the same partition in the same table. By identifying which rows were added by which transactions, which rows were deleted by which transactions, and which transactions have actually committed, it becomes a straightforward check to see which rows are visible for which transactions.
The specific details of this MVCC are incredibly simple. Each row of a table is stored in PostgreSQL as a tuple. Two fields of each tuple are xmin and xmax. Xmin is the transaction ID of the transaction that created the tuple. Xmax is the transaction ID of the transaction that deleted it (if any).
Along with the tuples in each table, a record of each transaction and its current state (in progress, committed, aborted) is kept in a universal transaction log.
When data in a table is selected, only those rows that are created and not destroyed are seen. That is, each row's xmin is observed. If the xmin is a transaction that is in progress or aborted (but not the transaction doing the observing), then the row is invisible. If the xmin is a transaction that has committed or the current transaction, however, then the xmax is observed. If the xmax is a transaction that is in progress or aborted and not the current transaction, or if there is no xmax at all, then the row is seen. Otherwise, the row is considered as already deleted.
Insertions are straightforward. The transaction that inserts the tuple simply creates it with the xmax blank and the xmin set to its transaction ID. Whether the transaction is committed or aborted is irrelevant; only the transaction's state needs to be updated.
Deletions are also straightforward. The tuple's xmax is set to the current transaction. Like insertions, whether the transaction is committed or aborted is irrelevant.
Updates are no more than a concurrent insert and delete.
PostgreSQL's MVCC method isn't intuitive, but it is simple and powerful. Instead of filling a rollback segment with long-running transactions, the table space itself is consumed.
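The visibility rule described above can be sketched as a small script. The transaction IDs and their states below are made up for illustration; real PostgreSQL applies this check per-tuple inside the engine:

```shell
# states of some hypothetical transaction IDs
state() {
  case "$1" in
    100) echo committed ;;
    200) echo in_progress ;;
    300) echo aborted ;;
    *)   echo unknown ;;
  esac
}

current_xid=400   # the transaction doing the observing

# visible XMIN XMAX -> exit 0 if the tuple is visible; XMAX of 0 means "never deleted"
visible() {
  xmin=$1; xmax=$2
  # the creator must have committed (or be us) for the row to exist at all
  if [ "$xmin" != "$current_xid" ] && [ "$(state "$xmin")" != committed ]; then
    return 1
  fi
  [ "$xmax" = 0 ] && return 0                    # never deleted: visible
  [ "$xmax" = "$current_xid" ] && return 1       # deleted by us: invisible
  [ "$(state "$xmax")" = committed ] && return 1 # deleter committed: invisible
  return 0                                       # deleter in progress/aborted: visible
}

visible 100 0   && echo "A: committed creator, never deleted -> visible"
visible 200 0   || echo "B: creator still in progress -> invisible"
visible 100 300 && echo "C: deleter aborted -> visible"
visible 100 100 || echo "D: deleted by a committed xact -> invisible"
```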

VACUUMING

PostgreSQL's MVCC is remarkably simple and remarkably fast, except for one fatal flaw. Over time, data accumulates in the table space. Old rows that were deleted or updated don't go away. There is no automatic elimination process once a transaction commits. Not only will this data fill disks, but the database will slow down as the majority of tuples are ignored. Seemingly empty tables can be filled with millions and millions of no longer relevant tuples. Indexes can help in these scenarios, but table scans will take much longer than expected.
That's why you have to vacuum PostgreSQL tables. The vacuum process simply removes the tuples that are no longer needed, freeing up valuable space on the hard drive and increasing the performance of the database by limiting the number of tuples that are checked for each query.
How do you identify tuples that are expired? Their xmax will be a transaction that committed long ago, before any currently running transaction started.
How often should a table be vacuumed? If none of the rows are ever deleted or updated, then vacuuming is never necessary. However, if a table is frequently updated or rows deleted, then vacuuming should be done regularly, depending on how frequently those changes occur.
Nowadays PostgreSQL ships with an auto-vacuum daemon that can automate vacuuming of tables to appropriate intervals depending on how each table is used.
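To see why dead space matters, here is a runnable sketch using SQLite's VACUUM, which rewrites the file. (Note the difference: PostgreSQL's plain VACUUM marks dead tuples reusable rather than shrinking the file; VACUUM FULL is what reclaims disk space. SQLite is used here only to make the demo self-contained.)

```shell
db=/tmp/vacuum-demo.db
rm -f "$db"
# fill a table with ~1 MB of rows, then delete them all;
# the file keeps its size because the dead pages are only freelisted
sqlite3 "$db" "CREATE TABLE t (x TEXT);
WITH RECURSIVE c(i) AS (SELECT 1 UNION ALL SELECT i+1 FROM c WHERE i < 5000)
INSERT INTO t SELECT hex(randomblob(100)) FROM c;
DELETE FROM t;"
before=$(wc -c < "$db")
sqlite3 "$db" "VACUUM;"
after=$(wc -c < "$db")
echo "before vacuum: $before bytes, after: $after bytes"
```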

Rollback Segments or MVCC?

Too many people think that Oracle's solution to ACID compliance is the best solution out there. I beg to differ, of course. Oracle's approach is one of many, and depending on your data needs, may not be the best.
I have seen PostgreSQL perform much better in areas where Oracle isn't very suitable. In general, PostgreSQL does well in cases where Oracle would fill its rollback partition. However, in other use cases Oracle is much better optimized for the task. I can't tell you precisely which situations are better or worse. It's something you'll have to discover for yourself.

Conclusion

ACID compliance is a standard we as developers need to strive for in all of our applications. We can no longer excuse ourselves from it claiming that it is too hard to get right. There are many choices out there that can give us ACID compliance, or something very close to it, and two of them are Oracle and PostgreSQL. But Oracle and PostgreSQL approach ACID compliance in different ways, giving them different performance characteristics. It's up to us to figure out which way will best suit our needs.

PostgreSQL vs. SQL Server, Oracle: Enterprise-ready and able to compete

Why are you paying so much in licensing costs and annual maintenance when you could use PostgreSQL for free, and get community support and upgrades for free as well? This is the question that Neil Matthew and Richard Stones pose to smaller companies regarding their less critical applications.
More on PostgreSQL:
Why PostgreSQL can best SQL Server, Oracle

Face-off: MySQL, PostgreSQL and SQL Server go head to head
Matthew and Stones, authors of Apress' Beginning Databases with PostgreSQL, discuss the advantages and drawbacks of using this open source relational database management system instead of Microsoft SQL Server or Oracle Database Management System. They also explain why the choice of database toolsets dictates database choices.
What are the most compelling reasons to dump or keep Microsoft SQL Server and bring in PostgreSQL?
Neil Matthew: For applications where downtime seriously threatens a company's financial results, executives are always going to want to deal with a big player who has the resources -- both technical and financial -- to 'be there' should things go wrong. The risk involved in the failure of a company's invoice and payment system for any significant period of time is huge. In an emergency, having companies the size of Microsoft or Oracle to call on may significantly mitigate that risk.
For many smaller, less critical applications, ask yourself why you are paying so much in licensing costs and annual maintenance when you could use PostgreSQL for free, and also get community support and upgrades free?
How does PostgreSQL's feature set compare to that of proprietary databases?
Richard Stones: In terms of standard SQL support, it's very good indeed. If a feature is in the SQL92 standard, you can be pretty sure that PostgreSQL is going to support it correctly.
Normally, Neil and I advise developers to stay away from extensions to the SQL standard in any case.
What are the advantages and weaknesses of PostgreSQL?
Stones: Technically, the main disadvantages of using PostgreSQL are in three areas. First, the ability to write functions and stored procedures is somewhat more limited than you would get with Oracle's PL/SQL or Sybase's T-SQL. Unless you are doing some extremely sophisticated work in stored procedures, this is not a major limitation.
Secondly, features for very large databases like table spaces, partitioned tables and highly complicated locking are still strongest in the proprietary database vendors' offerings; however, PostgreSQL is moving forward in these areas all the time.
Finally, proprietary development tools are stronger. Microsoft particularly has excellent tools, which not surprisingly work best with a Microsoft product set. This toolset advantage does have an effect on the choice of database product.
Matthew: The advantages of PostgreSQL are cost and the ability to look at the source code to understand what's going on. Very few developers will ever make changes to the source or, even better, submit fixes. I do believe the ability to examine the code in order to understand why something doesn't behave as you expect is a great benefit.
Is PostgreSQL capable of competing, feature-wise, with Microsoft SQL Server or Oracle? Is it enterprise ready?
Stones: In my opinion, PostgreSQL is enterprise ready. For many uses, PostgreSQL is just as suitable as Microsoft SQL Server or Oracle, but with a big cost advantage. The features that you need 95% of the time are there and work as expected. The underlying engine is very stable and copes well with a good range of data volumes. It also runs on your choice of hardware and operating system, not just whatever some big vendor might insist you buy to run your database.
Matthew: Absolutely. Of course, it's not the solution to all database needs, any more than any other vendor's product would be. For a large, multi-terabyte data warehouse you still need a specialized database product with some advanced features, specifically for handling those kinds of data volumes.
What are some situations where PostgreSQL might be used in conjunction with Microsoft SQL Server or Oracle or MySQL?
Matthew: A wide range of needs naturally leads to a choice of solutions. By sticking to standard SQL92 functionality, companies can mix and match their database solutions to best fit the problem while minimizing costs and complexity. You can have large, expensive highly featured products for large complex problems and cost-effective, but still very reliable, products for more everyday needs.
Stones: It's difficult to see why you would use MySQL and PostgreSQL side-by-side. MySQL has historically traded some functionality for performance. For most purposes, you don't need to make that trade, and PostgreSQL performance is more than adequate.

PHP 6 Features

PHP is already popular, used in millions of domains (according to Netcraft), supported by most ISPs and used by household-name Web companies like Yahoo! The upcoming versions of PHP aim to add to this success by introducing new features that make PHP more usable in some cases and more secure in others. Are you ready for PHP V6? If you were upgrading tomorrow, would your scripts execute just fine or would you have work to do? This article focuses on the changes for PHP V6 — some of them back-ported to versions PHP V5.x — that could require some tweaks to your current scripts.
If you're not using PHP yet and have been thinking about it, take a look at its latest features. These features, from Unicode to core support for XML, make it even easier for you to write feature-filled PHP applications.
PHP V6 is currently available as a developer snapshot, so you can download and try out many of the features and changes listed in this article. For features that have been implemented in the current snapshot, see Resources.
Much improved for PHP V6 is support for Unicode strings in many of the core functions. This new feature has a big impact because it will allow PHP to support a broader set of characters for international support. So, if you're a developer or architect using a different language, such as the Java™ programming language, because it has better internationalization (i18n) support than PHP, it'll be time to take another look at PHP when the support improves.
Because you can download and use a developer's version of PHP V6 today, you will see some functions already supporting Unicode strings. For a list of functions that have been tested and verified to handle Unicode, see Resources.
What is Unicode?
Unicode is an industry-standard set of characters, character encoding, and encoding methodologies primarily aimed at enabling i18n and localization (l10n). The Unicode Transformation Format (UTF) specifies a way to encode characters for Unicode. For more information about Unicode and UTF, see Resources.
Namespaces are a way of avoiding name collisions between functions and classes without using prefixes in naming conventions that make the names of your methods and classes unreadable. So by using namespaces, you can have class names that someone else might use, but now you don't have to worry about running into any problems. Listing 1 provides an example of a namespace in PHP.
You won't have to update or change anything in your code because any PHP code you write that doesn't include namespaces will run just fine. Because the namespaces feature appears to be back-ported to V5.3 of PHP, when it becomes available, you can start to introduce namespaces into your own PHP applications.

Listing 1. Example of a namespace
<?php
// I'm not sure why I would implement my own XMLWriter, but at least
// the name of this one won't collide with the one built in to PHP
namespace NathanAGood;
class XMLWriter 
{
    // Implementation here...
}

$writer = new NathanAGood\XMLWriter();

?>

Depending on how you use PHP and what your scripts look like now, the language and syntax differences in PHP V6 may or may not affect you as much as the next features, which are those that directly allow you to introduce Web 2.0 features into your PHP application.
SOAP is one of the protocols that Web services "speak" and is supported in quite a few other languages, such as the Java programming language and Microsoft® .NET. Although there are other ways to consume and expose Web services, such as Representational State Transfer (REST), SOAP remains a common way of allowing different platforms to have interoperability. In addition to SOAP modules in the PHP Extension and Application Repository (PEAR) library, a SOAP extension to PHP was introduced in V5. This extension wasn't enabled by default, so you have to enable the extension or hope your ISP did. In addition, PEAR packages are available that allow you to build SOAP clients and servers, such as the SOAP package.
Unless you change the default, the SOAP extension will be enabled for you in V6. These extensions provide an easy way to implement SOAP clients and SOAP servers, allowing you to build PHP applications that consume and provide Web services.
If SOAP extensions are on by default, that means you won't have to configure them in PHP. If you develop PHP applications and publish them to an ISP, you may need to check with your ISP to verify that SOAP extensions will be enabled for you when they upgrade.
As of PHP V5.1, XMLReader and XMLWriter have been part of the core of PHP, which makes it easier for you to work with XML in your PHP applications. Like the SOAP extensions, this can be good news if you use SOAP or XML because PHP V6 will be a better fit for you than V4 out of the box.
The XMLWriter and XMLReader are stream-based object-oriented classes that allow you to read and write XML without having to worry about the XML details.

In addition to having new features, PHP V6 will drop some functions and features found in previous versions. Most of these, such as register_globals and safe_mode, are widely considered "broken" in current PHP because they can expose security risks. In an effort to clean up PHP, the functions and features listed below will be removed from PHP V6. Opponents of this removal will most likely cite existing scripts breaking after ISPs or enterprises upgrade to PHP V6, but proponents of the cleanup will be happy that the PHP team is closing some holes and providing a cleaner, safer implementation.
Features that will be removed in PHP V6 include:
  • magic_quotes
  • register_globals
  • register_long_arrays
  • safe_mode
Citing portability, performance, and inconvenience, the PHP documentation discourages the use of magic_quotes. It's so discouraged that it's being removed from PHP V6 altogether, so before upgrading to PHP V6, make sure that all your code avoids using magic_quotes. If you're using magic_quotes to escape strings for database calls, use your database implementation's parameterized queries, if they're supported. If not, use your database implementation's escape function, such as mysql_escape_string for MySQL or pg_escape_string for PostgreSQL. Listing 2 shows an example of magic_quotes use.

Listing 2. Using magic_quotes (discouraged)
<?php
// Assuming magic_quotes_gpc is on, $_GET arrives pre-escaped...
$sql = "INSERT INTO USERS (USERNAME) VALUES ('{$_GET['username']}')";
?>

After preparing your PHP code for the new versions of PHP, your code should look like that in Listing 3.

Listing 3. Using parameterized queries (recommended)
<?php
// Using a parameterized query (here via PDO) for MySQL, as an example
$statement = $dbh->prepare("INSERT INTO USERS (USERNAME) VALUES (?)");
$statement->execute(array($_GET['username']));
?>

Now that support for magic_quotes is being completely removed, the get_magic_quotes_gpc() function will no longer be available. This may affect some older PHP scripts, so before updating, make sure you fix any locations in which this function is used.
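If your code must run on both older and newer PHP during the transition, you can guard the call with function_exists(). The helper below is a sketch; the name unescape_gpc() is my own, not a PHP built-in:

```php
<?php
// Sketch: undo magic_quotes-style escaping only where the feature still exists.
function unescape_gpc($value)
{
    // Where magic_quotes is gone, function_exists() is false and the
    // value passes through untouched.
    if (function_exists('get_magic_quotes_gpc') && get_magic_quotes_gpc()) {
        return stripslashes($value);
    }
    return $value;
}

echo unescape_gpc("O'Brien"), "\n";
?>
```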
The register_globals configuration key has defaulted to off since PHP V4.2, a change that was controversial at the time. When register_globals is turned on, it is easy to use variables that can be injected with values from HTML forms. Because these variables don't require initialization in your scripts, it's easy to write scripts with gaping security holes. The register_globals documentation (see Resources) provides much more information about register_globals. See Listing 4 for an example of using register_globals.

Listing 4. Using register_globals (discouraged)
<?php
// A security hole: if register_globals is on, the value of $user_authorized
// can be set by a user passing it on the query string
// (i.e., http://www.example.com/myscript.php?user_authorized=true)
if ($user_authorized) {
    // Show them everyone's sensitive data...
}
?>

If your PHP code relies on register_globals, you should update it. Even if you aren't preparing for newer versions of PHP, consider updating it for security reasons alone. When you're finished, your code should look like Listing 5.

Listing 5. Being specific instead (recommended)
<?php
function is_authorized() {
    if (isset($_SESSION['user'])) {
        return true;
    } else {
        return false;
    }
}

$user_authorized = is_authorized();
?>




The register_long_arrays setting, when turned on, registers the $HTTP_*_VARS predefined variables. If you're using the longer variables, update your code now to use the shorter variables. This setting was introduced in PHP V5, presumably for backward compatibility, and the PHP folks recommend turning it off for performance reasons. Listing 6 shows an example of register_long_arrays use.

Listing 6. Using deprecated registered arrays (discouraged) 
<?php
    // Echoes the name of the user given on the query string, like
    // http://www.example.com/myscript.php?username=ngood
    echo "Welcome, {$HTTP_GET_VARS['username']}!";
?>

If your PHP code looks like that shown in Listing 6, update it to look like that in Listing 7. Shut off the register_long_arrays setting if it's on, then test your scripts again.

Listing 7. Using $_GET (recommended)
<?php
    // Using the supported $_GET array instead.
    echo "Welcome, {$_GET['username']}!";
?>

The safe_mode configuration key, when turned on, ensures that the owner of a file being operated on matches the owner of the script that is executing. It was originally an attempt to handle security when operating in a shared server environment, as many ISPs do. (For a link to a list of the functions affected by this safe_mode change, see Resources.) Your PHP code will be unaffected by this change, but it's good to be aware of it in case you're setting up PHP in the future or counting on safe_mode in your scripts.
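If you were counting on safe_mode for isolation on a shared host, open_basedir is a commonly used (and still supported) alternative that restricts file operations to a directory tree. A php.ini sketch, with an example path:

```ini
; safe_mode is going away; do not rely on it for isolation
safe_mode = Off
; restrict file operations to the web root instead (example path)
open_basedir = /usr/local/apache/htdocs
```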
Microsoft Active Server Pages (ASP)-style tags — the shorter version of the PHP tags — are no longer supported. To make sure this is not an issue for your scripts, verify that you aren't using the <% or %> tags in your PHP files. Replace them with <?php and ?>.
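To audit a code base for ASP-style tags, you can scan for the <% marker. The function below is a rough sketch (the name find_asp_tag_files() is my own); it will also flag <% appearing inside strings or comments, so treat hits as candidates to review:

```php
<?php
// Sketch: return the .php files under $dir that contain ASP-style tags.
function find_asp_tag_files($dir)
{
    $hits = array();
    $iter = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($dir));
    foreach ($iter as $file) {
        $path = (string) $file;
        if (substr($path, -4) === '.php'
            && strpos(file_get_contents($path), '<%') !== false) {
            $hits[] = $path; // candidate for conversion to standard PHP tags
        }
    }
    return $hits;
}

foreach (find_asp_tag_files(getcwd()) as $path) {
    echo $path, "\n";
}
?>
```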
The PHP team is removing support for both FreeType 1 and GD 1, citing the age and lack of ongoing developments of both libraries as the reason. Newer versions of both of these libraries are available that provide better functionality. For more information about FreeType and GD, see Resources.
The ereg extension, which supports Portable Operating System Interface (POSIX) regular expressions, is being removed from core PHP support. If you are using any of the POSIX regex functions, this change will affect you unless you include the ereg functionality separately. If you're using POSIX regex today, consider taking the time to update your regex functions to use the Perl-Compatible Regular Expression (PCRE) functions because they give you more features and perform better. Table 1 provides a list of the POSIX regex functions that will not be available after ereg is removed. Their PCRE replacements are also shown.

Table 1. ereg() functions and their PCRE equivalents
ereg() function                     Similar preg() function
ereg(), eregi()                     preg_match()
ereg_replace(), eregi_replace()     preg_replace()
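When converting, the two things to remember are that PCRE patterns need delimiters (commonly /) and that the case-insensitive eregi() variants map to the i pattern modifier. A small sketch:

```php
<?php
$input = 'Hello';

// Old (being removed): ereg('^[a-z]+$', $input) and eregi('^[a-z]+$', $input)

// PCRE equivalent: note the added / delimiters; 'Hello' fails the
// case-sensitive match...
var_dump(preg_match('/^[a-z]+$/', $input));   // int(0)

// ...while the /i modifier reproduces the eregi() behavior
var_dump(preg_match('/^[a-z]+$/i', $input));  // int(1)
?>
```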

Installation commands for LAPP

#! /bin/bash
cd /usr/local
tar -xvjf src/postgresql-8.2.4.tar.bz2
cd postgresql-8.2.4/
./configure
gmake
gmake install
adduser -M postgres
groupadd apache
adduser -M -g apache apache
mkdir /usr/local/pgsql/data
chown postgres:postgres /usr/local/pgsql/data
su -c '/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data' postgres
su -c '/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1&' postgres
su -c '/usr/local/pgsql/bin/createuser sundar' postgres
cp contrib/start-scripts/linux /etc/init.d/postgresql
chmod a+x /etc/init.d/postgresql
# reindexdb ships with the main server as of PostgreSQL 8.1, so the old
# contrib copy step is unnecessary for 8.2.4; gmake install already placed
# it in /usr/local/pgsql/bin
cd /etc/init.d
chkconfig --add postgresql
chkconfig postgresql on
/etc/init.d/postgresql restart
sed -i '/\/usr\/local\/pgsql\/bin/!s/\(^PATH\) *=\(.*\)/\1=\/usr\/local\/pgsql\/bin\:\2/' /etc/skel/.bash_profile
cp /etc/skel/.bash_profile /home/sundar
echo Postgresql Installation Completed

cd /usr/local            
tar -xvzf src/apache_1.3.37.tar.Z
tar -xvzf src/php-5.2.5.tar.gz
cd apache_1.3.37/
./configure --enable-module=so
gmake
gmake install
echo Apache Installation Completed
cd ../php-5.2.5/
./configure --with-pgsql --with-apxs=/usr/local/apache/bin/apxs --with-curl=/usr --enable-calendar --enable-dba --enable-ftp --with-gd --with-jpeg-dir --with-png-dir --with-ttf --with-freetype-dir --enable-gd-native-ttf --with-mime-magic --with-zlib-dir --enable-soap --with-regex=php
gmake
gmake install
cp /usr/local/src/httpd.conf /usr/local/apache/conf/httpd.conf
cp /usr/local/src/httpd /etc/init.d/httpd

cd /etc/init.d
chkconfig --add httpd
chkconfig httpd on
/etc/init.d/httpd restart
cp /usr/local/src/phpinfo.php /usr/local/apache/htdocs
echo PHP Installation Completed
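After the build, a quick check from PHP confirms the pgsql extension was compiled in and can reach the local server. This is a sketch; the connection string assumes the postgres superuser and the default local socket created by the script above:

```php
<?php
// Sketch: verify the freshly built PHP can see and use PostgreSQL.
if (!extension_loaded('pgsql')) {
    echo "pgsql extension is not loaded; check the ./configure flags\n";
} else {
    // Assumption: local server running, 'postgres' superuser available
    $conn = @pg_connect('dbname=postgres user=postgres');
    echo $conn ? "PostgreSQL connection OK\n" : "PostgreSQL connection failed\n";
}
?>
```

Drop this into /usr/local/apache/htdocs alongside phpinfo.php and request it through the browser, or run it with the command-line php binary.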