Channel: SHINE SERVERS LLP Blog

How To Optimise MySQL & Apache On cPanel/WHM


In this optimization process we will go over the Apache core configuration and the modules that are part of the Apache core. With the correct settings for Apache and MySQL you can get excellent results and an appropriate level of resource use without installing third-party proxy and cache modules. So let’s start.

 

Apache & PHP

In the first stage, run EasyApache and select the following:

* Apache Version 2.4+

* PHP Version 5.4+

* In step 5, “Exhaustive Options List”, select:

- Deflate

- Expires

- MPM Prefork

- MPM Worker

After EasyApache finishes, go to WHM » Service Configuration » Apache Configuration » “Global Configuration” and set the values according to the level of resources available on your server.

Suggested values, scaling from 2GB of memory or less up to 12GB:

Apache Directive          ≤2GB    mid-range   12GB
StartServers              4       8           16
MinSpareServers           4       8           16
MaxSpareServers           8       16          32
ServerLimit               64      128         256
MaxRequestWorkers         50      120         250
MaxConnectionsPerChild    1000    2500        5000
Keep-Alive                On      On          On
Keep-Alive Timeout        5       5           5
Max Keep-Alive Requests   50      120         120
Timeout                   30      60          60
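When adapting these numbers, MaxRequestWorkers is usually derived from the memory you can dedicate to Apache divided by the size of one Apache process. A rough, illustrative helper (the function name and all numbers are assumptions, not measurements from this article):

```shell
# Hypothetical sizing helper: estimate MaxRequestWorkers from the memory
# left for Apache (total minus what MySQL/OS reserve) and the average
# resident size of one Apache process, all in MB.
estimate_workers() {
  local total_mb=$1 reserved_mb=$2 per_proc_mb=$3
  echo $(( (total_mb - reserved_mb) / per_proc_mb ))
}

estimate_workers 2048 512 30    # 2GB box, 512MB reserved, ~30MB/process
```

Check your real per-process size with `ps` or `top` before trusting any estimate.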

 

Now go to WHM » Service Configuration » Apache Configuration » Include Editor » “Pre VirtualHost Include” and give users minimal caching and data compression, so the server does less work for the same requests, by pasting the code below into the text field.

# Cache Control Settings for one hour cache
<FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
Header set Cache-Control "max-age=3600, public"
</FilesMatch>

<FilesMatch "\.(xml|txt)$">
Header set Cache-Control "max-age=3600, public, must-revalidate"
</FilesMatch>

<FilesMatch "\.(html|htm)$">
Header set Cache-Control "max-age=3600, must-revalidate"
</FilesMatch>

# Mod Deflate performs data compression
<IfModule mod_deflate.c>
<FilesMatch "\.(js|css|html|php|xml|jpg|png|gif)$">
SetOutputFilter DEFLATE
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE no-gzip
</FilesMatch>
</IfModule>
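Note that the dot in each FilesMatch pattern should be escaped as `\.`; unescaped, it matches any character. You can sanity-check a pattern with grep -E, which uses the same extended-regex syntax Apache does:

```shell
# Sanity-check the FilesMatch extension pattern with grep -E.
pattern='\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$'

echo "logo.png"      | grep -qE "$pattern" && echo "logo.png matches"
echo "script.min.js" | grep -qE "$pattern" && echo "script.min.js matches"
# Without the backslash, a name like "stylecss" would wrongly match "css".
echo "stylecss"      | grep -qE "$pattern" || echo "stylecss does not match"
```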

 

Go to WHM » Service Configuration » “PHP Configuration Editor” and set the parameters according to your needs:

- memory_limit

- max_execution_time

- max_input_time

 

MySQL

For MySQL you need to update the configuration file, which is usually located at /etc/my.cnf.

A recommended base configuration for 1 core & 2GB memory (MySQL 5.5):

[mysqld]
    local-infile = 0
    max_connections = 250
    key_buffer = 64M
    myisam_sort_buffer_size = 64M
    join_buffer_size = 1M
    read_buffer_size = 1M
    sort_buffer_size = 2M
    max_heap_table_size = 16M
    table_open_cache = 5000
    thread_cache_size = 286
    interactive_timeout = 25
    wait_timeout = 7000
    connect_timeout = 15
    max_allowed_packet = 16M
    max_connect_errors = 10
    query_cache_limit = 2M
    query_cache_size = 32M
    query_cache_type = 1
    tmp_table_size = 16M
    open_files_limit=2528

[mysqld_safe]

[mysqldump]
    quick
    max_allowed_packet = 16M
[myisamchk]
    key_buffer = 64M
    sort_buffer = 64M
    read_buffer = 16M
    write_buffer = 16M
[mysqlhotcopy]
    interactive-timeout
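As a sanity check when tuning, a rough worst-case memory figure is the global buffers plus max_connections times the per-connection buffers. A back-of-the-envelope sketch (the helper name and the 4MB per-connection figure are assumptions for illustration):

```shell
# Rough worst-case MySQL memory estimate in MB:
# global buffers + max_connections * approximate per-connection cost.
mysql_mem_mb() {
  local key_buffer=$1 query_cache=$2 max_conns=$3 per_conn=$4
  echo $(( key_buffer + query_cache + max_conns * per_conn ))
}

# The 2GB config above: key_buffer 64M, query_cache 32M, 250 connections,
# assuming ~4MB of sort/join/read buffers per connection.
mysql_mem_mb 64 32 250 4
```

If the estimate approaches your physical RAM, lower max_connections or the per-connection buffers.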

 

A recommended base configuration for 8 cores & 12GB memory (shared server, MySQL 5.5):

[mysqld]
local-infile=0
max_connections = 600
max_user_connections=1000
key_buffer_size = 512M
myisam_sort_buffer_size = 64M
read_buffer_size = 1M
table_open_cache = 5000
thread_cache_size = 384
wait_timeout = 20
connect_timeout = 10
tmp_table_size = 256M
max_heap_table_size = 128M
max_allowed_packet = 64M
net_buffer_length = 16384
max_connect_errors = 10
concurrent_insert = 2
read_rnd_buffer_size = 786432
bulk_insert_buffer_size = 8M
query_cache_limit = 5M
query_cache_size = 128M
query_cache_type = 1
query_prealloc_size = 262144
query_alloc_block_size = 65535
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
max_write_lock_count = 8
slow_query_log
log-error
external-locking=FALSE
open_files_limit=50000

[mysqld_safe]

[mysqldump]
quick
max_allowed_packet = 16M

[isamchk]
key_buffer = 384M
sort_buffer = 384M
read_buffer = 256M
write_buffer = 256M

[myisamchk]
key_buffer = 384M
sort_buffer = 384M
read_buffer = 256M
write_buffer = 256M

#### Per connection configuration ####
# NOTE: place these under the [mysqld] section above; options listed after
# [myisamchk] would otherwise not apply to the server.
sort_buffer_size = 1M
join_buffer_size = 1M
thread_stack = 192K

 

Repair and optimize the databases, then restart MySQL:

mysqlcheck --check --auto-repair --all-databases
mysqlcheck --optimize --all-databases
/etc/init.d/mysql restart
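The repair and optimize steps above can also be scheduled; a hypothetical weekly cron entry (the file path and schedule are illustrative, not from this article):

```
# /etc/cron.d/mysql-maintenance (illustrative)
0 3 * * 0   root  mysqlcheck --check --auto-repair --all-databases >/dev/null 2>&1
30 3 * * 0  root  mysqlcheck --optimize --all-databases >/dev/null 2>&1
```

Files under /etc/cron.d take a user field (here root) between the schedule and the command.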

 

Security & Limit Resources

 

Install CSF (ConfigServer Security & Firewall) at: http://configserver.com/free/csf/install.txt

1) Go to WHM » Plugins » ConfigServer Security & Firewall » “Check Server Security” and address each item flagged as requiring a fix.

2) Go to WHM » Plugins » ConfigServer Security & Firewall » “Firewall Configuration” and set the parameters according to your needs:

PT_USERMEM=180

PT_USERTIME=180

PT_USERKILL=1

PT_USERKILL_ALERT=1 (Optional)

 

Now enjoy your new, faster and more efficient server.

The post How To Optimise MySQL & Apache On cPanel/WHM appeared first on Shine Servers.


Update / Install Packages Under Redhat Enterprise / CentOS Linux Version 6.x


How do I use the yum command to update and patch my Red Hat Enterprise Linux / CentOS Linux version 6.x server via RHN / the Internet? Can I use the up2date command under RHEL 6?

The up2date command was part of RHEL v4.x and older versions. On RHEL 6 you need to use the yum command to update and patch the system via RHN or the Internet. Use yum to install critical and non-critical security updates as well as binary packages. Log in as the root user to install and update the system.

Task: Register my system with RHN

To register your system with RHN type the following command and just follow on screen instructions (CentOS user skip to next step):
# rhn_register

Task: Display list of updated software (security fix)

Type the following command at shell prompt:
# yum list updates

Task: Patch up system by applying all updates

To download and install all updates type the following command:
# yum update

Task: List all installed packages

List all installed packages, enter:
# rpm -qa
# yum list installed

Find out if httpd package installed or not, enter:
# rpm -qa | grep httpd
# yum list installed httpd

Task: Check for and update specified packages

# yum update {package-name-1}
To check for and update httpd package, enter:
# yum update httpd

Task: Search for packages by name

Search httpd and all matching perl packages, enter:
# yum list {package-name}
# yum list {regex}
# yum list httpd
# yum list perl*

Sample output:

Loading "installonlyn" plugin
Loading "security" plugin
Setting up repositories
Reading repository metadata in from local files
Installed Packages
perl.i386                                4:5.8.8-10.el5_0.2     installed
perl-Archive-Tar.noarch                  1.30-1.fc6             installed
perl-BSD-Resource.i386                   1.28-1.fc6.1           installed
perl-Compress-Zlib.i386                  1.42-1.fc6             installed
perl-DBD-MySQL.i386                      3.0007-1.fc6           installed
perl-DBI.i386                            1.52-1.fc6             installed
perl-Digest-HMAC.noarch                  1.01-15                installed
perl-Digest-SHA1.i386                    2.11-1.2.1             installed
perl-HTML-Parser.i386                    3.55-1.fc6             installed
.....
.......
..
perl-libxml-perl.noarch                  0.08-1.2.1             base
perl-suidperl.i386                       4:5.8.8-10.el5_0.2     updates

Task: Install the specified packages [ RPM(s) ]

Install package called httpd:
# yum install {package-name-1} {package-name-2}
# yum install httpd
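In scripts it helps to make installs idempotent; a small sketch (the helper name is made up, not a yum feature) that installs only when rpm reports the package missing:

```shell
# Install a package only if rpm does not already report it installed,
# so the helper is safe to run repeatedly from provisioning scripts.
ensure_pkg() {
  if rpm -q "$1" >/dev/null 2>&1; then
    echo "$1 already installed"
  else
    yum -y install "$1"
  fi
}

# Example: ensure_pkg httpd
```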

Task: Remove / Uninstall the specified packages [ RPM(s) ]

Remove package called httpd, enter:
# yum remove {package-name-1} {package-name-2}
# yum remove httpd

Task: Display the list of available packages

# yum list all

Task: Display list of group software

Type the following command:
# yum grouplist
Output:

Installed Groups:
   Engineering and Scientific
   MySQL Database
   Editors
   System Tools
   Text-based Internet
   Legacy Network Server
   DNS Name Server
   Dialup Networking Support
   FTP Server
   Network Servers
   Legacy Software Development
   Legacy Software Support
   Development Libraries
   Graphics
   Web Server
   Ruby
   Printing Support
   Mail Server
   Server Configuration Tools
   PostgreSQL Database
Available Groups:
   Office/Productivity
   Administration Tools
   Beagle
   Development Tools
   GNOME Software Development
   X Software Development
   Virtualization
   GNOME Desktop Environment
   Authoring and Publishing
   Mono
   Games and Entertainment
   XFCE-4.4
   Tomboy
   Java
   Java Development
   Emacs
   X Window System
   Windows File Server
   KDE Software Development
   KDE (K Desktop Environment)
   Horde
   Sound and Video
   FreeNX and NX
   News Server
   Yum Utilities
   Graphical Internet
Done

Task: Install all the default packages by group

Install all ‘Development Tools’ group packages, enter:
# yum groupinstall "Development Tools"

Task: Update all the default packages by group

Update all ‘Development Tools’ group packages, enter:
# yum groupupdate "Development Tools"

Task: Remove all packages in a group

Remove all ‘Development Tools’ group packages, enter:
# yum groupremove "Development Tools"

Task: Install particular architecture package

If you are using a 64-bit RHEL version, it is possible to install 32-bit packages:
# yum install {package-name}.{architecture}
# yum install mysql.i386

Task: Display packages not installed via official RHN subscribed repos

Show all packages not available via subscribed channels or repositories i.e show packages installed via other repos:
# yum list extras
Sample output:

Loading "installonlyn" plugin
Loading "security" plugin
Setting up repositories
Reading repository metadata in from local files
Extra Packages
DenyHosts.noarch                         2.6-python2.4          installed
VMwareTools.i386                         6532-44356             installed
john.i386                                1.7.0.2-3.el5.rf       installed
kernel.i686                              2.6.18-8.1.15.el5      installed
kernel-devel.i686                        2.6.18-8.1.15.el5      installed
lighttpd.i386                            1.4.18-1.el5.rf        installed
lighttpd-fastcgi.i386                    1.4.18-1.el5.rf        installed
psad.i386                                2.1-1                  installed
rssh.i386                                2.3.2-1.2.el5.rf       installed

Task: Display what package provides the file

You can easily find out what RPM package provides the file. For example find out what provides the /etc/passwd file:
# yum whatprovides /etc/passwd
Sample output:

Loading "installonlyn" plugin
Loading "security" plugin
Setting up repositories
Reading repository metadata in from local files
setup.noarch                             2.5.58-1.el5           base
Matched from:
/etc/passwd
setup.noarch                             2.5.58-1.el5           installed
Matched from:
/etc/passwd

You can use the same command to list packages that satisfy dependencies:
# yum whatprovides {dependency-1} {dependency-2}
Refer to the yum command man page for more information:
# man yum


How To Use Nginx As Reverse Proxy Server


Nginx is an open source web server and reverse proxy server. You can use nginx for load balancing and/or as a proxy solution to run services from internal machines through your host’s single public IP address, such as 202.54.1.1. In this post, I will explain how to install nginx as a reverse proxy server for an Apache+PHP5 domain called www.example.com and a Lighttpd static-asset domain called static.example.com. You need to type the following commands on vm00 (IP address 192.168.1.1) only.

DNS Setup

Make sure both www.example.com and static.example.com point to public IP address 202.54.1.1.

Install nginx server

Type the following command to install nginx web server:
$ cd /tmp
$ wget http://nginx.org/packages/rhel/6/noarch/RPMS/nginx-release-rhel-6-0.el6.ngx.noarch.rpm
# rpm -iv nginx-release-rhel-6-0.el6.ngx.noarch.rpm
# yum install nginx

Sample outputs:

Loaded plugins: rhnplugin
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 0:1.2.1-1.el6.ngx will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=========================================================================
 Package      Arch          Version                   Repository    Size
=========================================================================
Installing:
 nginx        x86_64        1.2.1-1.el6.ngx           nginx        331 k
Transaction Summary
=========================================================================
Install       1 Package(s)
Total download size: 331 k
Installed size: 730 k
Is this ok [y/N]: y
Downloading Packages:
nginx-1.2.1-1.el6.ngx.x86_64.rpm                  | 331 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
  Installing : nginx-1.2.1-1.el6.ngx.x86_64                          1/1
----------------------------------------------------------------------
Thanks for using NGINX!
Check out our community web site:
* http://nginx.org/en/support.html
If you have questions about commercial support for NGINX please visit:
* http://www.nginx.com/support.html
----------------------------------------------------------------------
  Verifying  : nginx-1.2.1-1.el6.ngx.x86_64                          1/1
Installed:
  nginx.x86_64 0:1.2.1-1.el6.ngx
Complete!

Configure the nginx web server as reverse proxy

Edit /etc/nginx/conf.d/default.conf, enter:
# vi /etc/nginx/conf.d/default.conf
Add/correct as follows:

 
## Basic reverse proxy server ##
## Apache (vm02) backend for www.example.com ##
upstream apachephp  {
      server 192.168.1.11:80; #Apache1
}

## Lighttpd (vm01) backend for static.example.com ##
upstream lighttpd  {
      server 192.168.1.10:80; #Lighttpd1
}

## Start www.example.com ##
server {
    listen       202.54.1.1:80;
    server_name  www.example.com;

    access_log  /var/log/nginx/log/www.example.access.log  main;
    error_log  /var/log/nginx/log/www.example.error.log;
    root   /usr/share/nginx/html;
    index  index.html index.htm;

    ## send request back to apache1 ##
    location / {
     proxy_pass  http://apachephp;
     proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
     proxy_redirect off;
     proxy_buffering off;
     proxy_set_header        Host            $host;
     proxy_set_header        X-Real-IP       $remote_addr;
     proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
   }
}
## End www.example.com ##

## START static.example.com ##
server {
   listen      202.54.1.1:80;
   server_name static.example.com;
   access_log  /var/log/nginx/log/static.example.com.access.log  main;
   error_log   /var/log/nginx/log/static.example.com.error.log;
   root        /usr/local/nginx/html;
   index       index.html;

   location / {
        proxy_pass  http://lighttpd;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header        Host            static.example.com;
        proxy_set_header        X-Real-IP       $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
## END static.example.com  ##

Turn on Nginx

Type the following commands:
# chkconfig nginx on
# service nginx start

Configure firewall

Set firewall as follows:

  • Drop all INPUT/OUTPUT chain traffic by default.
  • Only open tcp port 202.54.1.1:80 and/or 443 on eth0 only.
  • Set eth1 as a trusted device so that communication takes place between the nginx reverse proxy and the Apache/Lighttpd backend servers.

Run the following command to set and customize firewall as described above:
# system-config-firewall-tui
You can edit /etc/sysconfig/iptables manually and set the firewall too.
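The three rules above can be expressed directly in /etc/sysconfig/iptables; the fragment below is an illustrative sketch (interface names and the public IP are taken from this example setup and may differ on yours):

```
# Illustrative /etc/sysconfig/iptables fragment: default-drop policy,
# eth1 trusted for backend traffic, only tcp/80 open on the public IP.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
-A INPUT -i eth1 -j ACCEPT
-A OUTPUT -o eth1 -j ACCEPT
-A INPUT -i eth0 -p tcp -d 202.54.1.1 --dport 80 -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
```

Add a matching rule for port 443 if you serve HTTPS, then restart the iptables service.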

/etc/sysctl.conf

Edit /etc/sysctl.conf as follows:

 
# ExecShield
kernel.exec-shield = 1
kernel.randomize_va_space = 1

# IPv4 settings
net.ipv4.ip_forward = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Increase system file descriptor limit
fs.file-max = 50000

# Increase system IP port limits
net.ipv4.ip_local_port_range = 2000 65000

# Ipv6
net.ipv6.conf.default.router_solicitations = 0
net.ipv6.conf.default.accept_ra_rtr_pref = 0
net.ipv6.conf.default.accept_ra_pinfo = 0
net.ipv6.conf.default.accept_ra_defrtr = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.default.max_addresses = 1

Load new Linux kernel settings, run:
# sysctl -p
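Before running sysctl -p it can help to confirm a key actually made it into the file; a tiny illustrative helper (the function name is made up):

```shell
# Report whether a sysctl key is present at the start of a line in the
# given config file; prints "ok: <key>" or "missing: <key>".
check_key() {
  grep -q "^$1" "$2" && echo "ok: $1" || echo "missing: $1"
}

# Example: check_key net.ipv4.tcp_syncookies /etc/sysctl.conf
```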


cPanel Optimize Website No longer working


When a client tries to enable or disable “Optimize Website” in cPanel, the following error is shown:

OptimizeWS::optimizews(,) failed: Modification of non-creatable array value attempted, subscript -1 at /usr/local/cpanel/Cpanel/OptimizeWS.pm line 104, <HC> line 52.

Here is a Solution:

To be certain you are not overwriting any existing data:

# mv /home/[cPanel user]/.htaccess /home/[cPanel user]/.htaccess.bak
# echo > /home/[cPanel user]/.htaccess; chown [cPanel user]:[cPanel user] /home/[cPanel user]/.htaccess

cPanel >> Software/Services >> Optimize Website should work as expected once an .htaccess file with some content exists at /home/[cPanel user]/.htaccess.
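When the fix has to be applied repeatedly, the commands above can be wrapped in a small helper; this is a sketch, and the function name is made up. The chown step from the original fix still has to be run as root:

```shell
# Back up any existing .htaccess and recreate it with some content,
# mirroring the manual mv + echo fix above.
fix_htaccess() {
  local home_dir=$1
  local f="$home_dir/.htaccess"
  [ -f "$f" ] && mv "$f" "$f.bak"   # keep a backup of any existing file
  echo "# restored" > "$f"          # the file must exist and be non-empty
  # as root, also run: chown user:user "$f"
}

# Example: fix_htaccess /home/exampleuser
```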

Let me know if anything else is needed; I’ll make sure it gets fixed for you.

Thanks


How To Use MySQL Query Profiling


What is the MySQL slow query log?

The MySQL slow query log is a log that MySQL sends slow, potentially problematic queries to. This logging functionality comes with MySQL but is turned off by default. What queries are logged is determined by customizable server variables that allow for query profiling based on an application’s performance requirements. Generally the queries that are logged are queries that take longer than a specified amount of time to execute or queries that do not properly hit indexes.

Setting up profiling variables

The primary server variables for setting up the MySQL slow query log are:

slow_query_log			G 
slow_query_log_file			G 
long_query_time			G / S
log_queries_not_using_indexes	G
min_examined_row_limit		G / S

NOTE: (G) global variable, (S) session variable

slow_query_log - Boolean for turning the slow query log on and off.

slow_query_log_file - The absolute path for the query log file. The file’s directory should be owned by the mysqld user and have the correct permissions to be read from and written to. The mysql daemon will likely be running as `mysql` but to verify run the following in the Linux terminal:

 ps -ef | grep bin/mysqld | cut -d' ' -f1

The output will likely display the current user as well as the mysqld user. An example of setting the directory path /var/log/mysql:

cd /var/log
mkdir mysql
chmod 755 mysql
chown mysql:mysql mysql

long_query_time - The time, in seconds, for checking query length. For a value of 5, any query taking longer than 5s to execute would be logged.

log_queries_not_using_indexes - Boolean value whether to log queries that are not hitting indexes. When doing query analysis, it is important to log queries that are not hitting indexes.

min_examined_row_limit - Sets a lower limit on how many rows should be examined. A value of 1000 would ignore any query that analyzes less than 1000 rows.

The MySQL server variables can be set in the MySQL conf file or dynamically via a MySQL GUI or MySQL command line. If the variables are set in the conf file, they will be persisted when the server restarts but will also require a server restart to become active. The MySQL conf file is usually located in `/etc or /usr`, typically `/etc/my.cnf` or `/etc/mysql/my.cnf`. To find the conf file (may have to broaden search to more root directories):

find /etc -name my.cnf
find /usr -name my.cnf

Once the conf file has been found, simply append the desired values under the [mysqld] heading:

[mysqld]
….
slow-query-log = 1
slow-query-log-file = /var/log/mysql/localhost-slow.log
long_query_time = 1
log-queries-not-using-indexes

Again, the changes will not take effect until after a server restart, so if the changes are needed immediately, set the variables dynamically:

mysql> SET GLOBAL slow_query_log = 'ON';
mysql> SET GLOBAL slow_query_log_file = '/var/log/mysql/localhost-slow.log';
mysql> SET GLOBAL log_queries_not_using_indexes = 'ON';
mysql> SET SESSION long_query_time = 1;
mysql> SET SESSION min_examined_row_limit = 100;

To check the variable values:

mysql> SHOW GLOBAL VARIABLES LIKE 'slow_query_log';
mysql> SHOW SESSION VARIABLES LIKE 'long_query_time';

One drawback to setting MySQL variables dynamically is that the variables will be lost upon server restart. It is advisable to add any important variables that you need to be persisted to the MySQL conf file.

NOTE: The syntax for setting variables dynamically via SET and placing them into the conf file are slightly different, e.g. `slow_query_log` vs. `slow-query-log`. View MySQL’s dynamic system variables page for the different syntaxes. The Option-File Format is the format for the conf file and System Variable Name is the variable name for setting the variables dynamically.

Generating query profile data

Now that the MySQL slow query log configurations have been outlined, it is time to generate some query data for profiling. This example was written on a running MySQL instance with no prior slow log configurations set. The example’s queries can be run via a MySQL GUI or through the MySQL command prompt. When monitoring the slow query log, it is useful to have two connection windows open to the server: one connection for writing the MySQL statements and one connection for watching the query log.

In the MySQL console tab, log into the MySQL server as a user with SUPER privileges. To start, create a test database and table, add some dummy data, and turn on the slow query log. This example should be run in a development environment, ideally with no other applications using MySQL, to help avoid cluttering the query log as it is being monitored:

$> mysql -u <user_name> -p

mysql> CREATE DATABASE profile_sampling;
mysql> USE profile_sampling;
mysql> CREATE TABLE users ( id TINYINT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(255) );
mysql> INSERT INTO users (name) VALUES ('Walter'),('Skyler'),('Jesse'),('Hank'),('Walter Jr.'),('Marie'),('Saul'),('Gustavo'),('Hector'),('Mike');
mysql> SET GLOBAL slow_query_log = 1;
mysql> SET GLOBAL slow_query_log_file = '/var/log/mysql/localhost-slow.log';
mysql> SET GLOBAL log_queries_not_using_indexes = 1;
mysql> SET long_query_time = 10;
mysql> SET min_examined_row_limit = 0;

There is now a test database and table with a small amount of test data. The slow query log was turned on but the query time was intentionally set high and the minimum row examined flag kept off. In the console tab for viewing the log:

cd /var/log/mysql
ls -l

There should be no slow query log in the folder yet, as no queries have been run. If there is, that means that the slow query log has been turned on and configured in the past, which may skew some of this example’s results. Back in the MySQL tab, run the following SQL:

mysql> USE profile_sampling;
mysql> SELECT * FROM users WHERE id = 1;

The query executed was a simple select using the Primary Key index from the table. This query was fast and used an index, so there will be no entries in the slow query log for this query. Look back in the query log directory and verify that no log was created. Now back in your MySQL window run:

mysql> SELECT * FROM users WHERE name = 'Jesse';

This query was run on a non-indexed column (name). At this point there will be a query in the log with the following info (it may not be exactly the same):

/var/log/mysql/localhost-slow.log

# Time: 140322 13:54:58
# User@Host: root[root] @ localhost []
# Query_time: 0.000303  Lock_time: 0.000090 Rows_sent: 1  Rows_examined: 10
use profile_sampling;
SET timestamp=1395521698;
SELECT * FROM users WHERE name = 'Jesse';

The query has been successfully logged. One more example. Raise the minimum examined row limit and run a similar query:

mysql> SET min_examined_row_limit = 100;
mysql> SELECT * FROM users WHERE name = 'Walter';

No data will be added to the log because the minimum of 100 rows was not analyzed.

NOTE: If there is no data being populated into the log, there are a couple of things that can be checked. First, check the permissions of the directory in which the log is being created. The owner/group should be the same as the mysqld user (see above for an example) and the permissions should be correct; chmod 755 to be sure. Second, there may have been existing slow query variable configurations that are interfering with the example. Reset the defaults by removing any slow query variables from the conf file and restarting the server, or set the global variables dynamically back to their default values. If the changes are made dynamically, log out and log back into MySQL to ensure the global updates take effect.

 

Analyzing query profile information

Looking at the query profile data from the above example:

# Time: 140322 13:54:58
# User@Host: root[root] @ localhost []
# Query_time: 0.000303  Lock_time: 0.000090 Rows_sent: 1  Rows_examined: 10
use profile_sampling;
SET timestamp=1395521698;
SELECT * FROM users WHERE name = 'Jesse';

The entry displays:

  • Time at which the query was run
  • Who ran it
  • How long the query took
  • Length of the lock
  • How many rows were returned
  • How many rows were examined

This is useful because any query that violates the performance requirements specified with the server variables will end up in the log. This allows a developer or admin to have MySQL alert them when a query is not performing as well as it should (as opposed to reading through source code and trying to find poorly written queries). Also, the query profiling data can be useful when it is collected over a period of time, which can help determine what circumstances are contributing to poor application performance.

Using mysqldumpslow

In a more realistic example, profiling would be enabled on a database driven application, providing a moderate stream of data to profile against. The log would be continually getting written to, likely more frequently than anybody would be watching. As the log size grows, it becomes difficult to parse through all the data and problematic queries easily get lost in the log. MySQL offers another tool, mysqldumpslow, that helps avoid this problem by breaking down the slow query log. The binary is bundled with MySQL (on Linux) so to use it simply run the command and pass in the log path:

mysqldumpslow -t 5 -s at /var/log/mysql/localhost-slow.log

There are various parameters that can be used with the command to help customize output. In the above example the top 5 queries sorted by the average query time will be displayed. The resulting rows are more readable as well as grouped by query (this output is different from the example to demonstrate high values):

 

Count: 2  Time=68.34s (136s)  Lock=0.00s (0s)  Rows=39892974.5 (79785949), root[root]@localhost
  SELECT PL.pl_title, P.page_title
  FROM page P
  INNER JOIN pagelinks PL
  ON PL.pl_namespace = P.page_namespace
  WHERE P.page_namespace = N
…

The data being displayed:

  • Count – How many times the query has been logged
  • Time – Both the average time and the total time in the ()
  • Lock – Table lock time
  • Rows – Number of rows returned

The command abstracts numbers and strings, so the same queries with different WHERE clauses will be counted as the same query (notice the page_namespace = N). Having a tool like mysqldumpslow prevents the need to constantly watch the slow query log, instead allowing for periodic or automated checks. The parameters to the mysqldumpslow command allow for some complex expression matching which help drill down into the various queries in the log.

There are also 3rd party log analysis tools available that offer different data views, a popular one being pt-query-digest.
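When mysqldumpslow (or pt-query-digest) is not at hand, the raw log format shown above is simple enough to mine directly; an illustrative awk helper that lists the five slowest Query_time values (the function name and log path are placeholders):

```shell
# Extract Query_time values from "# Query_time: X  Lock_time: Y ..." lines
# in a slow query log and print the five largest.
top_query_times() {
  awk '/^# Query_time:/ { print $3 }' "$1" | sort -rn | head -5
}

# Example: top_query_times /var/log/mysql/localhost-slow.log
```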

Query breakdown

One last profiling tool to be aware of is MySQL’s built-in query profiler, which allows for a complex breakdown of a query. A good use case for the tool is grabbing a problematic query from the slow query log and running it directly in MySQL. First profiling must be turned on, then the query is run:

mysql> SET SESSION profiling = 1;
mysql> USE profile_sampling;
mysql> SELECT * FROM users WHERE name = 'Jesse';
mysql> SHOW PROFILES;

After profiling has been turned on, the SHOW PROFILES will show a table linking a Query_ID to a SQL statement. Find the Query_ID corresponding to the query ran and run the following query (replace # with your Query_ID):

mysql> SELECT * FROM INFORMATION_SCHEMA.PROFILING WHERE QUERY_ID=#;

Sample Output:

SEQ STATE DURATION
1 starting 0.000046
2 checking permissions 0.000005
3 opening tables 0.000036

The STATE is the “step” in the process of executing the query, and the DURATION is how long that step took to complete, in seconds. This isn’t an overly useful tool, but it is interesting and can help determine what part of the query execution is causing the most latency.

For a detailed outline of the various columns: http://dev.mysql.com/doc/refman/5.5/en/profiling-table.html

For a detailed overview of the various “steps”: http://dev.mysql.com/doc/refman/5.5/en/general-thread-states.html

NOTE: This tool should NOT be used in a production environment; rather, it is for analyzing specific queries.

Slow query log performance

One last question to address is how the slow query log will affect performance. In general, it is safe to run the slow query log in a production environment; neither the CPU nor the I/O load should be a concern. However, there should be some strategy for monitoring the log size to ensure the log file does not grow too big for the file system. Also, a good rule of thumb when running the slow query log in a production environment is to leave long_query_time at 1s or higher.

IMPORTANT: It is not a good idea to use the profiling tool, SET profiling=1, nor to log all queries, i.e. the general_log variable, in a production, high workload environment.

Conclusion

The slow query log is extremely helpful in singling out problematic queries and profiling overall query performance. When query profiling with the slow query log, a developer can get an in-depth understanding of how an application’s MySQL queries are performing. Using a tool such as mysqldumpslow, monitoring and evaluating the slow query log becomes manageable and can easily be incorporated into the development process. Now that problematic queries have been identified, the next step is to tune the queries for maximum performance.


How To Set Up mod_security with Apache on Debian/Ubuntu


Installing mod_security


Modsecurity is available in the Debian/Ubuntu repository:

apt-get install libapache2-modsecurity

Verify if the mod_security module was loaded.

apachectl -M | grep --color security

You should see a module named security2_module (shared) which indicates that the module was loaded.

Modsecurity’s installation includes a recommended configuration file which has to be renamed:

mv /etc/modsecurity/modsecurity.conf{-recommended,}
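If the `{-recommended,}` syntax looks odd: it is bash brace expansion, so the one-liner above is just shorthand for a two-argument mv. Echoing the expansion shows what actually runs:

```shell
# Brace expansion duplicates the word once per alternative, so this prints
# the long form of the rename command used above (nothing is moved here).
echo mv /etc/modsecurity/modsecurity.conf{-recommended,}
```

This prints `mv /etc/modsecurity/modsecurity.conf-recommended /etc/modsecurity/modsecurity.conf`. Note that brace expansion is a bash feature; in a plain POSIX sh, type the two paths out in full.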

Reload Apache

service apache2 reload

You’ll find a new log file for mod_security in the Apache log directory:

root@droplet:~# ls -l /var/log/apache2/modsec_audit.log
-rw-r----- 1 root root 0 Oct 19 08:08 /var/log/apache2/modsec_audit.log

Configuring mod_security


Out of the box, modsecurity doesn’t do anything as it needs rules to work. The default configuration file is set to DetectionOnly which logs requests according to rule matches and doesn’t block anything. This can be changed by editing the modsecurity.conf file:

nano /etc/modsecurity/modsecurity.conf

Find this line

SecRuleEngine DetectionOnly

and change it to:

SecRuleEngine On

If you’re trying this out on a production server, change this directive only after testing all your rules.

Another directive to modify is SecResponseBodyAccess. This configures whether response bodies are buffered (i.e. read by modsecurity). This is only necessary if data leakage detection and protection is required; otherwise, leaving it On will use up droplet resources and also increase the logfile size.

Find this

SecResponseBodyAccess On

and change it to:

SecResponseBodyAccess Off

Now we’ll limit the maximum data that can be posted to your web application. Two directives configure these:

SecRequestBodyLimit
SecRequestBodyNoFilesLimit

The SecRequestBodyLimit directive specifies the maximum POST data size. If anything larger is sent by a client the server will respond with a 413 Request Entity Too Large error. If your web application doesn’t have any file uploads this value can be greatly reduced.

The value mentioned in the configuration file is

SecRequestBodyLimit 13107200

which is 12.5MB.

Similar to this is the SecRequestBodyNoFilesLimit directive. The only difference is that this directive limits the size of POST data minus file uploads; this value should be “as low as practical.”

The value in the configuration file is

SecRequestBodyNoFilesLimit 131072

which is 128KB.

Along the lines of these directives is another one which affects server performance: SecRequestBodyInMemoryLimit. This directive is pretty much self-explanatory; it specifies how much of the “request body” data (POSTed data) should be kept in memory (RAM), anything more being placed on the hard disk (just like swapping). Since droplets use SSDs, this is not much of an issue; however, it can be set to a higher value if you have RAM to spare.

SecRequestBodyInMemoryLimit 131072

This is the value (128KB) specified in the configuration file.
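The byte values quoted for these directives can be sanity-checked with basic arithmetic (1 KB = 1024 bytes, 1 MB = 1048576 bytes):

```shell
# SecRequestBodyLimit 13107200 bytes:
echo "$(( 13107200 / 1024 )) KB"                          # kilobytes
awk 'BEGIN { printf "%.1f MB\n", 13107200 / 1048576 }'    # megabytes

# SecRequestBodyNoFilesLimit / SecRequestBodyInMemoryLimit 131072 bytes:
echo "$(( 131072 / 1024 )) KB"
```

13107200 bytes works out to 12800 KB, i.e. exactly 12.5 MB, and 131072 bytes to exactly 128 KB, matching the figures above.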

Testing SQL Injection


Before going ahead with configuring rules, we will create a PHP script which is vulnerable to SQL injection and try it out. Please note that this is just a basic PHP login script with no session handling. Be sure to change the MySQL password in the script below so that it will connect to the database:

/var/www/login.php

<html>
<body>
<?php
    if(isset($_POST['login']))
    {
        $username = $_POST['username'];
        $password = $_POST['password'];
        $con = mysqli_connect('localhost','root','password','sample');
        $result = mysqli_query($con, "SELECT * FROM `users` WHERE username='$username' AND password='$password'");
        if(mysqli_num_rows($result) == 0)
            echo 'Invalid username or password';
        else
            echo '<h1>Logged in</h1><p>A Secret for you....</p>';
    }
    else
    {
?>
        <form action="" method="post">
            Username: <input type="text" name="username"/><br />
            Password: <input type="password" name="password"/><br />
            <input type="submit" name="login" value="Login"/>
        </form>
<?php
    }
?>
</body>
</html>

This script will display a login form. Entering the right credentials will display a message “A Secret for you.”

We need credentials in the database. Create a MySQL database and a table, then insert usernames and passwords.

mysql -u root -p

This will take you to the mysql> prompt

create database sample;
connect sample;
create table users(username VARCHAR(100),password VARCHAR(100));
insert into users values('jesin','pwd');
insert into users values('alice','secret');
quit;

Open your browser, navigate to http://yourwebsite.com/login.php and enter the right pair of credentials.

Username: jesin
Password: pwd

You’ll see a message that indicates successful login. Now come back and enter a wrong pair of credentials; you’ll see the message Invalid username or password.

We can confirm that the script works right. The next job is to try our hand at SQL injection to bypass the login page. Enter the following in the username field:

' or true -- 

Note that there should be a space after --; this injection won’t work without that space. Leave the password field empty and hit the login button.

Voila! The script shows the message meant for authenticated users.
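To see why this works, it helps to reconstruct the SQL string that login.php ends up building once the injected username is interpolated. This is illustrative only (shell printf standing in for PHP's string interpolation):

```shell
# Reproduce the query string login.php builds from the injected input.
username="' or true -- "
password=""
printf "SELECT * FROM \`users\` WHERE username='%s' AND password='%s'\n" \
    "$username" "$password"
```

The output is `SELECT * FROM `users` WHERE username='' or true -- ' AND password=''`. Everything after `-- ` (double dash plus a space) is a comment to MySQL, so the WHERE clause collapses to `username='' or true`, which matches every row — hence the login succeeds.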

Setting Up Rules


To make your life easier, there are a lot of rules which are already installed along with mod_security. These are called CRS (Core Rule Set) and are located in

root@droplet:~# ls -l /usr/share/modsecurity-crs/
total 40
drwxr-xr-x 2 root root  4096 Oct 20 09:45 activated_rules
drwxr-xr-x 2 root root  4096 Oct 20 09:45 base_rules
drwxr-xr-x 2 root root  4096 Oct 20 09:45 experimental_rules
drwxr-xr-x 2 root root  4096 Oct 20 09:45 lua
-rw-r--r-- 1 root root 13544 Jul  2  2012 modsecurity_crs_10_setup.conf
drwxr-xr-x 2 root root  4096 Oct 20 09:45 optional_rules
drwxr-xr-x 3 root root  4096 Oct 20 09:45 util

The documentation is available at

root@droplet1:~# ls -l /usr/share/doc/modsecurity-crs/
total 40
-rw-r--r-- 1 root root   469 Jul  2  2012 changelog.Debian.gz
-rw-r--r-- 1 root root 12387 Jun 18  2012 changelog.gz
-rw-r--r-- 1 root root  1297 Jul  2  2012 copyright
drwxr-xr-x 3 root root  4096 Oct 20 09:45 examples
-rw-r--r-- 1 root root  1138 Mar 16  2012 README.Debian
-rw-r--r-- 1 root root  6495 Mar 16  2012 README.gz

To load these rules, we need to tell Apache to look into these directories. Edit the mod-security.conf file.

nano /etc/apache2/mods-enabled/mod-security.conf

Add the following directives inside <IfModule security2_module> </IfModule>:

Include "/usr/share/modsecurity-crs/*.conf"
Include "/usr/share/modsecurity-crs/activated_rules/*.conf"

The activated_rules directory is similar to Apache’s mods-enabled directory. The rules are available in directories:

/usr/share/modsecurity-crs/base_rules
/usr/share/modsecurity-crs/optional_rules
/usr/share/modsecurity-crs/experimental_rules

Symlinks must be created inside the activated_rules directory to activate these. Let us activate the SQL injection rules.

cd /usr/share/modsecurity-crs/activated_rules/
ln -s /usr/share/modsecurity-crs/base_rules/modsecurity_crs_41_sql_injection_attacks.conf .

Apache has to be reloaded for the rules to take effect.

service apache2 reload

Now open the login page we created earlier and try using the SQL injection query on the username field. If you had changed the SecRuleEngine directive to On, you’ll see a 403 Forbidden error. If it was left at the DetectionOnly option, the injection will be successful but the attempt will be logged in the modsec_audit.log file.

Writing Your Own mod_security Rules


In this section, we’ll create a rule chain which blocks the request if certain “spammy” words are entered in an HTML form. First, we’ll create a PHP script which gets the input from a textbox and displays it back to the user.

/var/www/form.php

<html>
    <body>
        <?php
            if(isset($_POST['data']))
                echo $_POST['data'];
            else
            {
        ?>
                <form method="post" action="">
                        Enter something here:<textarea name="data"></textarea>
                        <input type="submit"/>
                </form>
        <?php
            }
        ?>
    </body>
</html>

Custom rules can be added to any of the configuration files or placed in modsecurity directories. We’ll place our rules in a separate new file.

nano /etc/modsecurity/modsecurity_custom_rules.conf

Add the following to this file:

SecRule REQUEST_FILENAME "form.php" "id:'400001',chain,deny,log,msg:'Spam detected'"
SecRule REQUEST_METHOD "POST" chain
SecRule REQUEST_BODY "@rx (?i:(pills|insurance|rolex))"

Save the file and reload Apache. Open http://yourwebsite.com/form.php in the browser and enter text containing any of these words: pills, insurance, rolex.

You’ll either see a 403 page and a log entry, or only a log entry, based on the SecRuleEngine setting. The syntax for SecRule is

SecRule VARIABLES OPERATOR [ACTIONS]

Here we used the chain action to match the variables REQUEST_FILENAME with form.php, REQUEST_METHOD with POST and REQUEST_BODY with the regular expression (@rx) string (pills|insurance|rolex). The ?i: makes the match case insensitive. On a successful match of all three rules, the ACTION is to deny and log with the msg “Spam detected.” The chain action simulates a logical AND across the three rules.
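You can get a feel for what the third rule's pattern will and won't match by simulating the @rx operator with grep. grep -E is only an approximation of ModSecurity's PCRE engine, but for a simple alternation like this one it behaves the same, and -i stands in for the (?i:) prefix:

```shell
# Simulate the rule's case-insensitive alternation against sample POST bodies.
pattern='pills|insurance|rolex'
echo "Buy cheap ROLEX watches" | grep -Eiq "$pattern" && echo "match: blocked"
echo "an innocent comment"     | grep -Eiq "$pattern" || echo "no match: allowed"
```

The first line matches despite the different case; the second passes through untouched.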

Excluding Hosts and Directories


Sometimes it makes sense to exclude a particular directory or a domain name if it is running an application like phpMyAdmin, as modsecurity will block SQL queries. It is also better to exclude the admin backends of CMS applications like WordPress.

To disable modsecurity for a complete VirtualHost place the following

<IfModule security2_module>
    SecRuleEngine Off
</IfModule>

inside the <VirtualHost> section.

For a particular directory:

<Directory "/var/www/wp-admin">
    <IfModule security2_module>
        SecRuleEngine Off
    </IfModule>
</Directory>

If you don’t want to completely disable modsecurity, use the SecRuleRemoveById directive to remove a particular rule or rule chain by specifying its ID.

<LocationMatch "/wp-admin/update.php">
    <IfModule security2_module>
        SecRuleRemoveById 981173
    </IfModule>
</LocationMatch>

Further Reading


Official modsecurity documentation: https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual


The post How To Set Up mod_security with Apache on Debian/Ubuntu appeared first on Shine Servers.

How To Increase Page Load Speed with Apache mod_deflate


Apache’s mod_deflate is an Apache module that will compress output from your server before it is sent to the client. If you have a newer version of Apache, the mod_deflate module is probably loaded by default, but it may not be turned on. To check if compression is enabled on your site, first verify that the module is loaded in your httpd.conf file:

LoadModule deflate_module modules/mod_deflate.so

Then you can use to following web based tool to verify compression:

http://www.whatsmyip.org/http-compression-test/

For my server, CentOS 6.x, the module was loaded by default but compression was not on until I set up the configuration file. You can place your compression configurations into your httpd.conf file, an .htaccess file, or a .conf file in your httpd/conf.d directory. My base configuration file is as follows:

<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html
    AddOutputFilterByType DEFLATE text/plain 
    AddOutputFilterByType DEFLATE text/css 
    AddOutputFilterByType DEFLATE text/javascript
    AddOutputFilterByType DEFLATE text/xml
</IfModule>

The configuration file specifies that all the html, plain, css, and javascript text files should be compressed before being sent back to the client. When writing your configuration file, you don’t want to compress the images because the images are already compressed using their own specific algorithms and doubling compression just wastes CPU. Depending on the server you are running, you may want a more comprehensive compression schema based on different file types and browsers. More information can be found in the below referenced Apache docs.

Another thing to consider is that while the gzip compression algorithm is fast and efficient for smaller text files, it can be cumbersome on your CPU when trying to compress larger files. Be wary when adding compression to non-text files larger than 50 KB.
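The point about not compressing images can be demonstrated on the command line: gzip shrinks repetitive text dramatically, but gzipping data that is already compressed saves nothing and just adds header overhead (here gzip output stands in for a JPEG or PNG, which are already deflate/DCT compressed internally):

```shell
# Repetitive CSS-like text compresses well; gzipping gzip output does not.
text=$(yes 'body { margin: 0; padding: 0; }' | head -n 200)
plain=$(printf '%s' "$text" | wc -c)
once=$(printf '%s' "$text" | gzip -c | wc -c)
twice=$(printf '%s' "$text" | gzip -c | gzip -c | wc -c)
echo "plain=${plain}B gzipped=${once}B double-gzipped=${twice}B"
```

The gzipped size is a tiny fraction of the plain size, while the double-gzipped size is slightly larger than the single pass — CPU spent for negative gain, which is exactly why the filter list above stops at text types.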

When you examine the HTTP headers of your server’s response, you will see the following headers for compressed content:

Content-Encoding: gzip
Vary: Accept-Encoding

Here is another default configuration file taken from Ubuntu 12.10:

<IfModule mod_deflate.c>
    # these are known to be safe with MSIE 6
    AddOutputFilterByType DEFLATE text/html text/plain text/xml
    # everything else may cause problems with MSIE 6
    AddOutputFilterByType DEFLATE text/css
    AddOutputFilterByType DEFLATE application/x-javascript application/javascript 
    AddOutputFilterByType DEFLATE application/ecmascript
    AddOutputFilterByType DEFLATE application/rss+xml
</IfModule>

Reference
http://httpd.apache.org/docs/2.2/mod/mod_deflate.html

 

The post How To Increase Page Load Speed with Apache mod_deflate appeared first on Shine Servers.

How To Increase Page Load Speed with Apache KeepAlive


The KeepAlive directive for Apache allows a single connection to serve multiple requests. So on a typical page load, the client may need to download HTML, CSS, JS, and images. When KeepAlive is set to “On”, all of these files can be downloaded over a single connection. If KeepAlive is set to “Off”, each file download would require its own connection.

You can control how many requests can be served over a single connection with the MaxKeepAliveRequests directive, which defaults to 100. If you have pages with a lot of different files, consider setting this higher so that each page can load over a single connection.

One thing to be cautious of when using KeepAlive is that connections remain open waiting for new requests once they are established. This can use up a lot of memory, as processes sitting idle consume RAM. You can help avoid this with the KeepAliveTimeout directive, which specifies how long the connections remain open. I generally set this below 5 seconds, depending on the average load times of my site.

An important factor when deciding to use KeepAlive is the CPU vs. RAM usage requirements for your server. Having KeepAlive On will consume less CPU as the files are served in a single request, but will use more RAM because the processes will sit idly. Here is an example of KeepAlive settings I use:

KeepAlive             On
MaxKeepAliveRequests  50
KeepAliveTimeOut      3

Once KeepAlive is on you can see the following header in your server’s response:

Connection:  Keep-Alive


The post How To Increase Page Load Speed with Apache KeepAlive appeared first on Shine Servers.


Ten SEO Mistakes Made on Database Driven Websites


“Search engine friendly website” is one of those often-heard phrases, both from web site development companies and from their clients. Everyone knows that this is important to have, and yet it is one of the things that is actually often overlooked.

Search engine optimisation companies actually spend a lot of their time analysing a website and removing barriers to the search engines ranking a site highly. At the web development level, it is possible to build a site that is perfectly search engine friendly. One of the hardest types of sites to get right though are database driven websites. Listed below are ten of the most common issues that are created, often unknowingly, in the development process of a dynamically generated web site.

1. Pages with duplicate content - not enough differential areas within the pages, so that only small areas of the page change from page to page. It is essential that enough of the page text changes for the search engines to see an appreciable difference between one page and the next.

2. Pages with duplicate page titles - the page title is a great indicator to the search engines of the primary content of the page. Whilst this is often unique on sites such as e-commerce websites, it is often overlooked in other sites, particularly where small areas of the site are generated from a database, such as news pages.

3. Pages with duplicate meta descriptions - again, this is easy to overlook and set a global or category level meta description. These give the search engines a reason to penalise your site for not giving them enough information, and again, creating a unique meta description for every page is an essential SEO task.

4. Using auto-generation of pages as a shortcut instead of creating good content. This is linked quite closely to point 1, where it is possible to create pages that have only a tiny percentage difference between them. Databases are fantastic ways of storing information, but you still need to put the work in to fill them with content. Unique information about the subject of the page will immensely help both the long tail and the ability of the search engines to determine that a page is valuable.

5. Creating pages that are hidden behind form submissions or javascript postbacks that cannot be accessed by a search engine crawler. This is far more common than is generally realised. For instance, .NET creates postback links by default instead of proper links – potentially making huge sections of a site unreachable. Likewise, it is easy to hide lovely content-rich areas of your site behind a drop-down selector in a form, meaning certain areas of the site are not visible.

6. Too many query strings - this is a common bugbear of the professional SEO, where complicated database selections create deep levels of pages, but with seven or eight &id= type strings. Additionally, some bad development methodology can leave pages with null query strings that appear in every URL but don’t do anything. The answer to this is generally URL rewrites, creating much more search engine friendly and user-friendly URLs!

7. Putting query strings in different orders when accessed through different places – this can create duplicate content issues, which can cause major penalties.

8. Not using user language to generate automated pages – if you are going to create a database driven website that uses words in the query strings (or better in rewritten URLs) make sure that you use words that will help you with SEO – if you sell widgets, make sure you are using the word widgets somewhere in the URL instead of just product= or id= – keyword research can assist with this.

9. Not allowing the meta data and title to be edited easily after the site build. It is possible to hardcode the generation of meta information into a database that doesn’t allow it to be edited later. Creating a mechanism for modifying this information initially helps everyone at a later stage when the information needs changing without shoehorning it into an already developed structure.

10. Creating keyword stuffed pages by using auto-generation. Once upon a time, search engines quite liked pages with high densities of your keywords, but now these are likely to get you marked down rather than up. So be aware when creating pages that long pages with lots of your products can create too high a density. For instance, listing blue widgets, light blue widgets, navy blue widgets, sky blue widgets is going to create a page with a very high density for the phrase “blue widgets”.

These are just 10 of the most common potential optimisation pitfalls when creating dynamic websites. There are many more facets to producing a great database driven site, including user friendliness, speed, performance and security, but they all add together to make the best solution to your needs.

About the Author: Mark Stubbs is a freelance writer who specialises in internet marketing and web site development. For more information on database driven websites he suggests that you visit www.obs-group.co.uk.

Source

The post Ten SEO Mistakes Made on Database Driven Websites appeared first on Shine Servers.

Kloxo Useful SSH Commands


For those of you who have opted for the free Kloxo control panel on your VPS or dedicated server, here
are some common, and simple, commands you can use to make your life easier.

As with anything free, Kloxo has a few bugs any sys admin will run into. Even the big commercial packages, like cPanel, have annoying bugs.
Rather than banging your head against your keyboard for hours on end, I have compiled a list of the most commonly used, and simple, command line fixes for Kloxo.

Websites fail to load:
/script/fixweb

DNS Fails to resolve:
/script/fixdns

Horde Mail gives 500 internal server errors:
/script/fixhorde

Email, in general, just does not function:
/script/fixmail

Kloxo fails to load, don’t reboot, just run:
/script/restart

While these are the most common, there are also many more /script commands.
Be careful with what you run from /script – some can have nasty consequences, if not used properly.
If in doubt, please visit the support forums at http://lxcenter.org. Always log into your VePortal and create a
backup of your VPS container, before running any system wide command you are not familiar with. Allow time
for the backup to complete, before running the commands.

The post Kloxo Useful SSH Commands appeared first on Shine Servers.

How To Set Up Clustered Nameservers With cPanel


As important as DNS is to web hosting, it is a good idea to make it redundant when possible. If you have two or more cPanel servers, you can use cPanel’s DNS clustering to lower the risk of a DNS failure on a nameserver taking down all of your sites. Here’s how to set that up:

Step One: Enable Clustering For Each Server

First, click over to Configure Cluster in WHM on each server. In the Modify Cluster Status box, select Enable DNS clustering. Click the Change button.

Step Two: Configure The Primary Nameserver

On the first server, scroll down to Add a new server to the cluster. The type will be cpanel. Click Configure. This will take you to the cPanel DNS Remote Configuration page.
In Remote cPanel & WHM DNS service, put the hostname or IP address of the second nameserver. Next, in Remote server username, put the username of the nameserver. While this can sometimes be reseller, in most cases it will be root.

In the next area, Remote server access hash, you will need to put the ssh public key of the other server. To find that key, go to the Manage root’s SSH Keys page in the second server’s WHM. Click Generate a New Key. On the next page, leave the password blank and click the Generate Key button. cPanel will issue a warning about the security of an SSH key without a password, but unfortunately it is needed for this sort of automation. (It is only a security risk if someone gains root access to your server, by which point your server’s security will already have been compromised.)

Still on the second server, click back to Manage root’s SSH Keys. Then click View/Download Key under the Public Keys: heading. This will take you to the key which you will then copy back to the first server, in the Remote server access hash field.

Uncheck the Setup Reverse Trust Relationship checkbox.

Set the DNS role of the server to Write-only. Click Submit.

Step Three: Repeat Step Two, Only Backwards

Step Three is going through the same process as Step Two, only reversing the servers. Also, the role of the server should be set to Standalone instead of Write-only.

Adding DNS Zones

There is one quirk of this system: DNS zones for domains will have to be added on the Write-Only server. So when creating cPanel accounts on the Standalone server, make sure to add the DNS for the domain to the Write-Only server.


The post How To Set Up Clustered Nameservers With cPanel appeared first on Shine Servers.

Shine Servers LLP empowers students offers free & discounted hosting



Shine Servers LLP is excited to announce free Shared Web Hosting & 30% Off On Dedicated Servers For Lifetime.

Dear Customers,

Shine Servers LLP offers a wide range of Web Hosting Services including Self-Managed and Managed Dedicated Servers, Shared Web Hosting, Reseller Solutions, Control Panels, SSL Certificates and Domain Registration. If you require a presence on the web, we have a solution to suit your needs.

Between climbing tuition, the high cost of living and low employment, college students are always living on a tight budget. Even in my own student days I expected something extra from every store I visited, whether an extra discount or a free coffee at Starbucks; freebies like these always feel pleasant when you are on a fixed budget.

Therefore, we are excited to announce that Shine Servers is now providing “Free Shared Web Hosting” & a “30% Discount On Dedicated Servers” for LIFETIME (Only For STUDENTS).

Yes, that’s right, and it’s effective immediately!

Features Of Free Shared Web Hosting :

>> 24/7 Technical Support (Works As Expected)
>> 99.9% Uptime Guarantee
>> Pure Raid-10 Drive Storage
>> Latest cPanel/WHM with CloudLinux
>> NGINX Web Server
>> Multiple PHP Versions
>> Softaculous One-Click Installer
>> CloudFlare CDN Plugin
>> PHP, CGI, Perl, JavaScript, SSI & MySQL Support
Free Plan Configuration :

5 GB Performance Raid-10 Drive Storage
500GB Premium Bandwidth
Latest cPanel
One-Click Installer Softaculous
CloudFlare CDN Plugin
3 Add-On Domains
Unlimited Sub Domains
Unlimited Parked Domains
Unlimited MySQL Databases
Unlimited FTP Accounts
Unlimited E-Mail Accounts
Unlimited Forwarders
Unlimited Auto Responders
SSH Access
Ruby On Rails
Perl, CGI, Python, cURL, GD2, ionCube PHP Loader, phpMyAdmin
Activation Time: Instant

How To Order a Free Shared Web Hosting:

1. To Avail the Free Shared Hosting, Click Here To Order
2. Once ordered, do not pay the invoice generated during order processing; instead, raise a ticket at our Client Area, attaching your Student ID Card.
3. Once your ID is “Verified”, we will instantly set up your hosting.

How To Order a Discounted Dedicated Server:

Kindly use the StudentShineDEDICATED coupon code to avail a 30% recurring discount on any Dedicated Server listed here.
If anyone is facing any issues, please feel free to raise a ticket at our Customer Support or Email Us directly.

Suggestions are most welcome!

Thank You!
Regards
Bharat Vashist

The post Shine Servers LLP empowers students offers free & discounted hosting appeared first on Shine Servers.

WordPress 4.1.2 Released – Critical Security Update

How You Can Help In Nepal Relief Effort


As the CEO & Founder of Shine Servers LLP, I’ll be arranging a donation tomorrow to support the Nepal relief effort for the people affected by the #NepalEarthquake. We are not a heavy-earning company, but as #Entrepreneurs we can always donate a small fraction of our earnings to the people who are fighting for their survival against natural calamities.

Uday Foundation is sending medicines and essential supplies to Nepal and is working with local organizations to ensure their urgent distribution. Medical camps will also be organised soon.

Urgent Relief Material

Dry Ration
Tents
Matches and Candles
Tarpaulins and thick plastic sheets
Blankets and Sleeping Bags
Feeding bottles
Baby Food
Sanitary napkins
Essential Medicine

Financial Assistance Details

For Online Transfers:

Bank name: HDFC Bank
Branch: Anand Niketan, New Delhi 110021
Account name: UDAY FOUNDATION FOR CDRBG TRUST
Type: Savings
A/c No. 03361450000251
IFSC Code HDFC0000336

Cheques can be made in favour of “UDAY FOUNDATION FOR CDRBG TRUST” and sent at following address:

Uday Foundation,
113A/1, (Near Govardhan Resturant), Adhchini, Sri Aurobindo Marg,
New Delhi 110017 Phone: 011-26561333/444

Since Uday Foundation is not registered with FCRA, they cannot accept foreign donations; they accept donations from Non-Resident Indians only through a bank account operational in India. Contributions made by a citizen of India living in another country, from his personal savings, through the normal banking channels, are not treated as foreign contribution.

Please share the details after you have made the donation with help@udayfoundationindia.org, along with your complete address and PAN card number, enabling us to send an 80G tax exemption receipt for the same.

Drop-off Location

Uday Foundation
113A, Sri Aurobindo Marg, New Delhi 110017
Phone : 011-26561333/444, Mobile : 9868125819
Email : info@udayfoundationindia.org


Note: This information has been provided/published on a good faith basis, without any commercial motive. Shine Servers LLP does not vouch for the authenticity of the claims made by the intending donee, nor can we guarantee that the donations made by a donor will be used for the purpose as stated by the intending donee. You are requested to independently verify the contact information and other details before making a donation. Shine Servers LLP and/or its employees will not be responsible for the same.


Highest Regards,
Bharat Vashist

CEO & Founder
Shine Servers LLP || Leaders In Servers
www.shineservers.com || www.shineservers.in
……………………………………………………………………………………………………..
twitter: @shineservers || Facebook : https://www.facebook.com/ShineServers

The post How You Can Help In Nepal Relief Effort appeared first on Shine Servers.

How CloudFlare Increases Speed And Security Of Your Site



CloudFlare, a web performance and security company, is excited to announce our partnership with SHINE SERVERS LLP ! If you haven’t heard about CloudFlare before, our value proposition is simple: we’ll make any website twice as fast and protect it from a broad range of web threats.

Today, hundreds of thousands of websites—ranging from individual blogs to e-commerce sites to the websites of Fortune 500 companies to national governments—use CloudFlare to make their sites faster and more secure. We power more than 65 billion monthly page views—more than Amazon, Wikipedia, Twitter, Zynga, AOL, Apple, Bing, eBay, PayPal and Instagram combined—and over 25% of the Internet’s population regularly passes through our network.

Faster web performance

CloudFlare is designed to take a great hosting platform like SHINE SERVERS LLP and make it even better.

We run 23 data centers (http://www.cloudflare.com/network-map) strategically located around the world. When you sign up for CloudFlare, we begin routing your traffic to the nearest data center.

As your traffic passes through the data centers, we intelligently determine what parts of your website are static versus dynamic. The static portions are cached on our servers for a short period of time, typically less than 2 hours before we check to see if they’ve been updated. By automatically moving the static parts of your site closer to your visitors, the overall performance of your site improves significantly.

CloudFlare’s intelligent caching system also means you save bandwidth, which means saving money, and decreases the load on your servers, which means your web application will run faster and more efficiently than ever. On average, CloudFlare customers see a 60% decrease in bandwidth usage and a 65% decrease in total requests to their servers. The overall effect is that CloudFlare will typically cut the load time for pages on your site by 50%, which means higher engagement and happier visitors.

Broad web security

Over the course of 2011, CloudFlare identified a 700% increase in the number of distributed denial of service (DDoS) attacks we track on the Internet (http://blog.cloudflare.com/2011-the-year-of-the-ddos). As attacks like these increase, CloudFlare is stepping up to protect sites.

CloudFlare offers a broad range of security protections [link to http://www.cloudflare.com/features-security] against attacks such as DDoS, hacking, and spam submitted to a blog or comment form. What is powerful about our approach is that the system gets smarter the more sites join the CloudFlare community. We analyze the traffic patterns of hundreds of millions of visitors in real time and adapt the security systems to ensure good traffic gets through and bad traffic is stopped.

In time, our goal is nothing short of making attacks against websites a relic of history. And, given our scale and the billions of different attacks we see and adapt to every year, we’re well on our way to achieving that for sites on the CloudFlare network.

Signing up

Any website can deploy CloudFlare, regardless of your underlying platform. By integrating closely with SHINE SERVERS LLP, we make the process of setting up CloudFlare “1 click easy” through your existing SHINE SERVERS LLP [control panel] dashboard. Just look for the CloudFlare icon, choose the domain you want to enable, and click the orange cloud. That’s it!

We’ve kept the price as low as possible, and the plans offered through SHINE SERVERS LLP are free. Moreover, we never charge you for bandwidth or storage, and the reduced bandwidth usage saves you money on hosting costs.

For site owners who would like to take advantage of CloudFlare’s advanced offerings, we also offer a ‘Pro’ tier of service for $20/month [link to http://www.cloudflare.com/plans]. The ‘Pro’ tier includes all of the ‘Free’ tier’s offerings, as well as extra features like SSL, full web application firewall and faster analytics.

We’re proud that every day more than a thousand new sites, including some of the largest on the web, join the CloudFlare community. If you’re looking for a faster, safer website, you’ve got a good start with SHINE SERVERS LLP, but the next step is to join the CloudFlare community.

The post How CloudFlare Increases Speed And Security Of Your Site appeared first on Shine Servers.


How To Back Up Your MySQL Databases


MySQL is an open source relational database management system (DBMS) which is frequently deployed in a wide assortment of contexts, most commonly as part of the LAMP stack. The database system is easy to use, highly portable and, in the context of many applications, extremely efficient. As MySQL is often a centralized data store for large amounts of mission-critical data, making regular backups of your MySQL database is one of the most important disaster recovery tasks a system administrator can perform. This guide addresses a number of distinct methods for creating backups of your database, as well as for restoring databases from backups.

 

Backup Methodology

Most backups of MySQL databases in this guide are performed using the mysqldump tool, which is distributed with the default MySQL server installation. We recommend that you use mysqldump whenever possible because it is often the easiest and most efficient way to take database backups. Other methods detailed in this guide are provided for situations when you do not have access to the mysqldump tool, as in a recovery environment like Finnix or in situations where the local instance of the MySQL server will not start.

Nevertheless, this guide provides a mere overview of the mysqldump tool, as there are many options for and uses of mysqldump that fall beyond the scope of this document. We encourage you to become familiar with all of the procedures covered in this document, and to continue your exploration of mysqldump beyond the cases described here. Be sure to note the following:

  • The *.sql files created with mysqldump can be restored at any time. You can even edit the database .sql files manually (with great care!) using your favorite text editor.
  • If your databases only make use of the MyISAM storage engine, you can substitute the mysqldump command with the faster mysqlhotcopy.
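Because the *.sql files are plain text, a quick sanity check can confirm that a dump finished writing: by default mysqldump appends a “-- Dump completed on …” comment as the last line of a successful dump. A minimal sketch of that check (the marker test is a heuristic, not a guarantee that the dump is valid):

```python
def dump_looks_complete(path):
    """Heuristic check: mysqldump normally ends a successful dump with a
    '-- Dump completed on ...' trailer comment, so its absence suggests
    the file was truncated mid-write."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        data = f.read()
    # Only the tail of the file matters for the trailer comment.
    return "-- Dump completed" in data[-200:]
```

This is useful as a cheap post-backup check in a cron job before rotating older dumps away.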

 

Creating Backups of the Entire Database Management System (DBMS)

It is often necessary to take a back up (or “dump”) of an entire database management system along with all databases and tables, including the system databases which hold the users, permissions and passwords.

Option 1: Create Backups of an Entire Database Management System Using the mysqldump Utility

The most straightforward method for creating a single coherent backup of the entire MySQL database management system uses the mysqldump utility from the command line. The syntax for creating a database dump with a current timestamp is as follows:

mysqldump -u root -p --all-databases > dump-$( date '+%Y-%m-%d_%H-%M-%S' ).sql

This command will prompt you for a password before beginning the database backup in the current directory. This process can take anywhere from a few seconds to a few hours depending on the size of your databases.

Automate this process by adding a line to crontab:

0 1 * * * /usr/bin/mysqldump -u root -pPASSWORD --all-databases > dump-$( date '+\%Y-\%m-\%d_\%H-\%M-\%S' ).sql

For the example above, use which mysqldump to confirm the correct path to the command, replace root with the MySQL user you would like to run backups as, and replace PASSWORD with that user’s password.

In the crontab entry, ensure that there is no space between the -p flag and the password. Note also that cron treats unescaped % characters as line separators, so each % in the date format must be escaped as \%.
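For anything beyond a one-liner, the crontab can call a small wrapper script instead. The sketch below assumes a writable BACKUP_DIR and that the password is supplied via the MYSQL_PWD environment variable (both conventions are illustrative, not requirements), and adds gzip compression and seven-day retention:

```shell
#!/bin/sh
# Sketch of a cron-friendly mysqldump wrapper (paths are assumptions).
BACKUP_DIR="${BACKUP_DIR:-$HOME/mysql-backups}"
STAMP=$(date '+%Y-%m-%d_%H-%M-%S')
DUMPFILE="$BACKUP_DIR/dump-$STAMP.sql.gz"

mkdir -p "$BACKUP_DIR"

# Only attempt the dump when mysqldump is installed, so the script can
# be dry-run on a machine without MySQL.
if command -v mysqldump >/dev/null 2>&1; then
    mysqldump --all-databases -u root | gzip > "$DUMPFILE"
fi

# Retention: remove compressed dumps older than seven days.
find "$BACKUP_DIR" -name 'dump-*.sql.gz' -mtime +7 -delete

echo "$DUMPFILE"
```

Calling a script like this from cron also sidesteps the % escaping issue, since the date command runs inside the script rather than in the crontab line.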

Option 2: Create Backups of an Entire DBMS Using Copies of the MySQL Data Directory

While the mysqldump tool is the preferred backup method, there are a couple of cases that require a different approach. mysqldump only works when the database server is accessible and running. If the database cannot be started or the host system is inaccessible, we can copy MySQL’s data directory directly. This method is often necessary in situations where you only have access to a recovery environment like Finnix with your system’s disks mounted in that file system. If you’re attempting this method on your system itself, ensure that the database is not running. Issue a command that resembles the following:

/etc/init.d/mysql stop

On most distributions, the data directory is located in the /var/lib/mysql/ directory. If this directory doesn’t exist, examine the /etc/mysql/my.cnf file for a path to the data directory. Alternatively, you can search your file system for the data directory by issuing the following command:

find / -name mysql

Once you have located your MySQL data directory you can copy it to a backup location. The following example assumes that the MySQL data directory is located at /var/lib/mysql/:

cp -R /var/lib/mysql/* /opt/database/backup-1266871069/

In this case, we have recursively copied the contents of the data directory (e.g. /var/lib/mysql/) to a directory within the /opt/ hierarchy (e.g. /opt/database/backup-1266871069/). This directory must exist before initiating the copy operation. Consider the following sequence of operations:

/etc/init.d/mysql stop
mkdir -p /opt/database/backup-1266872202/
cp -R /var/lib/mysql/* /opt/database/backup-1266872202/

These commands begin by stopping the MySQL server daemon, then create a directory named /opt/database/backup-1266872202/ and perform a recursive copy of the data directory. Note that we’ve chosen the backup-[time_t] naming convention for our examples; substitute the paths above for your preferred organization and naming scheme. The cp command produces no output and can take some time to complete depending on the size of your database, so do not be alarmed if it takes a while. When the copy operation is finished, you may want to archive the data directory into a “tar” archive to make it easier to manage and move between machines. Issue the following commands to create the archive:

cd /opt/database/backup-1266872202
tar -czvf /opt/mysqlBackup-1266872202.tar.gz *

Once the tarball is created, you can easily transfer the file in the manner that is most convenient for you. Don’t forget to restart the MySQL server daemon again if needed:

/etc/init.d/mysql start

Creating Backups of a Single Database

In many cases, creating a backup of the entire database server isn’t required. In some cases, such as when upgrading a web application, the installer may recommend making a backup of the database in case the upgrade adversely affects it. Similarly, if you want to create a “dump” of a specific database to move it to a different server, you might consider the following method.

When possible, use the mysqldump tool to export a “dump” of a single database. This command will resemble the following:

mysqldump -u squire -ps3cr1t -h localhost danceLeaders > 1266861650-danceLeaders.sql

The above example is like the example in the previous section, except that rather than using the --all-databases option, it specifies a particular database name. In this case we create a backup of the danceLeaders database. In plainer notation, the form of this command is as follows:

mysqldump -u [username] -p[password] -h [host] [databaseName] > [backup-name].sql

For an additional example, we will backup the database named customer using the root database account by issuing the following command:

mysqldump -u root -p -h localhost customer > customerBackup.sql

You will be prompted for a password before mysqldump begins its backup process. As always, the backup file, in this case customerBackup.sql, is created in the directory where you issue this command. The mysqldump command can complete in a few seconds or a few hours depending on the size of the database and the load on the host when running the backup.
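The same command line can also be assembled from a script. The sketch below only builds the argument list (the function name is illustrative), so it can be inspected before being handed to subprocess.run when a server is available:

```python
def mysqldump_args(user, database, host="localhost"):
    """Assemble the argv for a single-database mysqldump. A bare -p
    (with no attached value) makes mysqldump prompt for the password,
    which keeps it out of the process list."""
    return ["mysqldump", "-u", user, "-p", "-h", host, database]

# Usage sketch, to be run where a MySQL server is reachable:
# import subprocess
# with open("customerBackup.sql", "w") as out:
#     subprocess.run(mysqldump_args("root", "customer"), stdout=out, check=True)
```

Building the argv as a list (rather than a shell string) avoids quoting problems with unusual database names.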

Creating Backups of a Single Table

Option 1: Create Backups of a Single Table Using the mysqldump Utility

This operation, like previous uses of the mysqldump utility in this document, allows you to create a backup of a single database table. Continuing our earlier examples, the following command backs up the table squireRecords in the danceLeaders database.

mysqldump -u squire -ps3cr1t -h localhost danceLeaders squireRecords > 1266861650-danceLeaders-squireRecords.sql

The above example is like the example in the previous section, except that we’ve added a table name specification to the command to specify the name of the table that we want to back up. The form of this command in a more plain notation is as follows:

mysqldump -u [username] -p[password] -h [host] [databaseName] [tableName] > [backup-name].sql

For an additional example, we will backup the table named “order” from the database named customer using the root database account by issuing the following command:

mysqldump -u root -p -h localhost customer order > customerBackup-order.sql

You will be prompted for a password before mysqldump begins its backup process. As always, the backup file (in this case customerBackup-order.sql) is created in the directory where you issue this command. The mysqldump command can complete in a few seconds or a few hours depending on the size of the table and the load on the host when running the backup.

Option 2: Create Backups of a Single Table Using the MySQL Client and an OUTFILE Statement

The MySQL client itself has some backup capability. It is useful when you are already logged in and you do not want to exit the current session. If you are using a live system and cannot afford down time, you should consider temporarily locking the table you’re backing up.

Do be aware that when backing up a single table using the MySQL client, that table’s structure is not maintained in the backup. Only the data itself is saved when using this method.

  1. Before we begin, we recommend performing a LOCK TABLES on the tables you intend to back up, followed by FLUSH TABLES, to ensure that the database is in a consistent state during the backup operation. You only need a read lock; this allows other clients to continue to query the tables while you take the backup. For a “read” lock, the syntax of LOCK TABLES looks like the following:
    LOCK TABLES tableName READ;
    

    To perform a LOCK TABLES on the order table of the customer database, issue the following command:

    mysql -u root -p -h localhost
    

    You will then be prompted for the root password. Once you have entered the database credentials, you will arrive at the mysql client prompt. Issue the following statements to lock the order table in the customer database (order is a reserved word in MySQL, so it must be quoted in backticks; the trailing ; is required for MySQL statements):

    USE customer;
    LOCK TABLES `order` READ;
    FLUSH TABLES;
    
  2. We can now begin the backup operation. To create a backup of a single table using the MySQL client, you will need to be logged in to your MySQL DBMS. If you are not currently logged in you may log in with the following command:
    mysql -u root -p -h localhost
    

    You will be prompted for a password. Once you have entered the correct password and are at the MySQL client prompt, you can use a SELECT * INTO OUTFILE statement. The syntax of this statement looks like the following:

    SELECT * INTO OUTFILE 'file_name' FROM tbl_name;
    

    In this example, we will create a backup of the data from the order table of the customer database. Issue the following statements to begin the backup procedure (the trailing ; is required for MySQL statements):

    USE customer;
    LOCK TABLES `order` READ;
    FLUSH TABLES;
    SELECT * INTO OUTFILE 'customerOrderBackup.sql' FROM `order`;
    UNLOCK TABLES;
    

    The customerOrderBackup.sql file will be created in the appropriate database sub-directory within MySQL’s data directory. The MySQL data directory is commonly /var/lib/mysql/. In this example, the OUTFILE will be /var/lib/mysql/customer/customerOrderBackup.sql. The location of this directory and file can, however, vary between Linux distributions. If you cannot find your backup file, you can search for it with the following command:

    find / -name customerOrderBackup.sql
    
  3. Once you have completed the backup operation, you will want to unlock the tables using the following command in the MySQL client. This will return your database to its normal operation. Log in to the MySQL client with the first command if you are not presently logged in and then issue the second command:
    mysql -u root -p -h localhost

    UNLOCK TABLES;
    

You can continue using your database as normal from this point.

Considerations for an Effective Backup Strategy

Creating backups of your MySQL database should be a regular and scheduled task. You might like to consider scheduling periodic backups using cron, mysqldump and/or mail. Consider our documentation for more information regarding cron. Implementing an automated backup solution may help minimize down time in a disaster recovery situation.

You do not need to log in as root when backing up databases. A MySQL user with read (e.g. SELECT) permission is able to use both the mysqldump and mysql (e.g. the MySQL client) tools to take backups, as described above. As a matter of common practice, we recommend that you not use the MySQL root user whenever possible to minimize security risks.
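As a sketch of that practice, a dedicated backup account can be created with only the privileges a plain mysqldump of table data needs (the username and password here are illustrative; dumping views, triggers or events may require additional grants such as SHOW VIEW):

```sql
CREATE USER 'backupuser'@'localhost' IDENTIFIED BY 's3cr1t';
GRANT SELECT, LOCK TABLES ON *.* TO 'backupuser'@'localhost';
FLUSH PRIVILEGES;
```

You can then run mysqldump as backupuser instead of root in the examples above.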

You may want to consider incremental backups as part of a long-term database backup plan. While this process is not covered here, we recommend that you consider the MySQL Database Backup Methods resource for more information.

Restoring an Entire DBMS From Backup

A backup that cannot be restored is of minimal value. We recommend testing your backups regularly to ensure that they can be restored in the event that you need them. When restoring backups of your MySQL database, the method you use depends on the method you used to create the backup in question.

Option 1: Restoring an Entire DBMS Using the MySQL Client and Backups Created by mysqldump

Before beginning the restoration process, this section assumes your system is running a newly installed version of MySQL without any existing databases or tables. If you already have databases and tables in your MySQL DBMS, please make a backup before proceeding as this process will overwrite current MySQL data.

You can easily restore your entire DBMS using the mysql command. The syntax for this will resemble the following:

mysql -u [username] -p[password] < backupFile.sql

In this case we’re simply restoring the entire DBMS. The command will look like the following:

mysql -u root -p < 1266861650-backup-all.sql

You will be prompted for the root MySQL user’s password. Once the correct credentials are supplied, the restoration process will begin. Since this process restores an entire DBMS, it can take anywhere from a few seconds to many hours.

Option 2: Restoring an Entire DBMS Using MySQL Data Files Copied Directly from MySQL’s Data Directory

Before beginning the restoration process, this section assumes your system is running a newly installed version of MySQL without any existing databases or tables. If you already have databases and tables in your MySQL DBMS, please make a backup before proceeding as this process will overwrite current MySQL data.

  1. If you have a complete backup of your MySQL data directory (commonly /var/lib/mysql), you can restore it from the command line. To ensure a successful restore, you must first stop the MySQL server daemon and delete the current data in the MySQL data directory.

    /etc/init.d/mysql stop
    rm -R /var/lib/mysql/*

  2. In the following example, the MySQL data directory backup is located in the /opt/database/backup-1266872202 directory. If you made a tarball of the data directory when you backed up your DBMS data directory, you will need to extract the files from the tarball before copying with the following commands:

    cp mysqlBackup-1266872202.tar.gz /var/lib/mysql/
    cd /var/lib/mysql
    tar xzvf mysqlBackup-1266872202.tar.gz

  3. Before we can restart the MySQL database process, we must ensure that the permissions are set correctly on the /var/lib/mysql/ directory. For this example, we assume the MySQL server daemon runs as the user mysql with the group mysql. To change the permissions on the data directory issue the following command:

    chown -R mysql:mysql /var/lib/mysql

  4. Alter the mysql:mysql portion of this command if your MySQL instance runs with different user and group permissions. The form of this argument is [user]:[group]. Finally we can start the MySQL server daemon with the following command:
    /etc/init.d/mysql start
    

    If you receive an error similar to the following:

    /usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)'
    

    You’ll need to find the old debian-sys-maint user’s password in the /etc/mysql/debian.cnf file and then change the new debian-sys-maint user’s password to it. You can view the old password using cat:

    cat /etc/mysql/debian.cnf | grep password
    

    Copy (or remember) the password. Then you’ll need to change the new debian-sys-maint user’s password. You can do this by logging in as the MySQL root user and issuing the following command (where <password> is the password of the old debian-sys-maint user):

    GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY '<password>' WITH GRANT OPTION;
    
  5. You’ll then need to restart MySQL with the following command:
    /etc/init.d/mysql restart
    

After the MySQL server has successfully started, you will want to test your MySQL DBMS and ensure that all databases and tables restored properly. We also recommend that you audit your logs for potential errors; in some cases MySQL can start successfully despite database errors.
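To support that audit, a short script can pull out the error lines from the server log. A minimal sketch (the log path in the comment is a common default and an assumption; adjust it for your distribution):

```python
def error_lines(log_text):
    """Return the lines of a MySQL error log that are flagged [ERROR]."""
    return [line for line in log_text.splitlines() if "[ERROR]" in line]

# Usage sketch:
# with open("/var/log/mysql/error.log") as f:
#     for line in error_lines(f.read()):
#         print(line)
```

An empty result does not prove the restore is healthy, but any [ERROR] lines deserve investigation before the server goes back into service.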

Restoring a Single Database from Backup

In cases where you have only created a backup for one database, or only need to restore a single database, the restoration process is somewhat different.

Before beginning the restoration process, this section assumes your system is running a newly installed version of MySQL without any existing databases or tables. If you already have databases and tables in your MySQL DBMS, please make a backup before proceeding as this process will overwrite current MySQL data.

  1. To restore a single database using the mysql command, first prepare the destination database. Log in to your (new) MySQL database server using the MySQL client:
    mysql -u root -p -h localhost
    
  2. You will be prompted for the root MySQL user’s password. After you have provided the correct credentials, you must create the destination database. In this case, the customer database will be restored:
    CREATE DATABASE customer;
    
  3. As with all MySQL statements, do not omit the final semi-colon (e.g. ;) at the conclusion of each command. Depending on your deployment, you may need to create a new MySQL user or recreate a previous user with access to the newly created database. The command for creating a new MySQL user takes the following form:
    CREATE USER '[username]'@'[host]' IDENTIFIED BY '[password]';
    
  4. In the next example, we will create a user named customeradmin:
    CREATE USER 'customeradmin'@'localhost' IDENTIFIED BY 's3cr1t';
    
  5. Now we will give customeradmin privileges to access the customer database. The command for granting privileges to a database for a specific user takes the following form:
    GRANT [privilegeType] ON [databaseName].[tableName] TO '[username]'@'[host]';
    
  6. For the purposes of the following example, we will give customeradmin full access to the customer database. Issue the following command in the MySQL client:
    GRANT ALL ON customer.* TO 'customeradmin'@'localhost';
    
  7. You may need to specify different access grants depending on the demands of your deployment. Consult the official documentation for MySQL’s GRANT statement. Once the destination database and MySQL user have been created, you can close the MySQL client with the following command:
    quit
    
  8. You can now use the mysql command to restore your SQL file. The form of this command resembles the following:
    mysql -u [username] -p[password] -h [host] [databaseName] < [filename].sql
    

In the following example, we will restore the customer database from a SQL backup file named customerBackup.sql (pay special attention to the < symbol in this command):

mysql -u root -p -h localhost customer < customerBackup.sql

You will be prompted for the root MySQL user’s password. Once the correct credentials are supplied, the restoration process will begin. The duration of this operation depends on your system’s load and the size of the database that you are restoring. It may complete in a few seconds, or it may take many hours.

Restoring a Single Table from Backup

Option 1: Restoring a Single Table Using the MySQL Client and Backups Created by mysqldump

Before beginning the restoration process, we assume that your MySQL instance already has an existing database that can receive the table you wish to restore. If your MySQL instance does not have the required database, we’ll need to create it before proceeding. First, log into your MySQL instance with the following command:

mysql -u root -p -h localhost

You will be prompted for the root MySQL user’s password. After you have provided the correct credentials, you must create the destination database. For the purpose of this example we will create the customer database and exit the mysql prompt by issuing the following statements:

CREATE DATABASE customer;
quit

If you already have the required database, you can safely skip the above step. To continue with the table restoration, issue a command in the following form:

mysql -u [username] -p[password] -h [host] [databaseName] < [filename].sql

For the following example, we will restore the order table into the existing customer database from an SQL backup file named customerOrderBackup.sql. Be very careful to use the < operator in the following command:

mysql -u root -p -h localhost customer < customerOrderBackup.sql

You will be prompted for the root MySQL user’s password. Once the correct credentials are supplied, the restoration process will begin. The duration of this operation depends on your system’s load and the size of the table that you are restoring. It may complete in a few seconds, or it may take many hours.

Option 2: Restoring a Single Table Using the MySQL Client and an INFILE Statement for Backups Created with OUTFILE

Before beginning the restoration process, we assume that your MySQL instance already has an existing database that can receive the table you wish to restore. If your MySQL instance does not have the required database, we’ll need to create it before proceeding. First, log into your MySQL instance with the following command:

mysql -u root -p -h localhost

You will be prompted for the root MySQL user’s password. After you have provided the correct credentials, you must create the destination database. For the purpose of this example we will create the customer database and exit the mysql prompt by issuing the following statements:

CREATE DATABASE customer;
quit

The data backup used in this case was created using the SELECT * INTO OUTFILE 'backupFile.sql' FROM tableName command. This type of backup only retains the data itself so the table structure must be recreated. To restore a single table from within the MySQL client, you must first prepare the destination database and table. Log in to your (new) MySQL instance using the MySQL client:

mysql -u root -p -h localhost

You will be prompted for the root MySQL user’s password. Once the correct credentials are supplied, you must create the destination database if it does not already exist. In this case, we will create the customer database. Issue the following statement:

CREATE DATABASE customer;

Remember that the semi-colons (e.g. ;) following each statement are required. Now you must create the destination table with the correct structure. The data types of the fields of the table must mirror those of the table where the backup originated. In this example, we will restore the order table of the customer database. There are 2 fields in the order table: custNum, with data type INT, and orderName, with data type VARCHAR(20); your table structure will be different:

USE customer;
CREATE TABLE `order` (custNum INT, orderName VARCHAR(20));

Depending on your deployment, you may need to create a new MySQL user or recreate a previous user with access to the newly created database. The command for creating a new MySQL user takes the following form:

CREATE USER '[username]'@'[host]' IDENTIFIED BY '[password]';

In the next example, we will create a user named customeradmin:

CREATE USER 'customeradmin'@'localhost' IDENTIFIED BY 's3cr1t';

Now we will give customeradmin privileges to access the customer database. The command for granting privileges to a database for a specific user takes the following form:

GRANT [privilegeType] ON [databaseName].[tableName] TO '[username]'@'[host]';

For the purposes of the following example, we will give customeradmin full access to the customer database. Issue the following command in the MySQL client:

GRANT ALL ON customer.* TO 'customeradmin'@'localhost';

You may need to specify different access grants depending on the demands of your deployment. Consult the official documentation for MySQL’s GRANT statement. Once the table and user have been created, we can import the backup data from the backup file using the LOAD DATA command. The syntax resembles the following:

LOAD DATA INFILE '[filename]' INTO TABLE [tableName];

In the following example, we will restore data from a file named customerOrderBackup.sql. When the MySQL client is given a bare filename (with no path) after INFILE, it looks for that file in the data sub-directory of the current database. For the filename customerOrderBackup.sql and the customer database, the path would be /var/lib/mysql/customer/customerOrderBackup.sql. Ensure that the file you are trying to restore from exists, especially if MySQL generates File not found errors.

To import the data from the customerOrderBackup.sql file located in the customer database’s data sub-directory, issue the following statement:

LOAD DATA INFILE 'customerOrderBackup.sql' INTO TABLE `order`;

The duration of this operation depends on your system’s load and the size of the table that you are restoring; it may complete in a few seconds, or it may take many hours. After you have verified that your data was imported successfully, you can log out:

quit

The post How To Back Up Your MySQL Databases appeared first on Shine Servers.

How To Move MySQL Data Directory On A Separate Partition


Prerequisite: A free partition that will serve as a dedicated MySQL partition.

Note: These instructions assume that the partition you wish to mount is /dev/sdc1

  1. Backup all MySQL databases
    Code:
    mysqldump --opt --all-databases | gzip > /home/alldatabases.sql.gz
  2. Stop tailwatchd and mysql (tailwatchd monitors services, so stop it first to prevent it from prematurely restarting mysql)
    Code:
    /scripts/restartsrv_tailwatchd --stop
    /scripts/restartsrv_mysql --stop
  3. Backup the MySQL data directory in case something goes awry
    Code:
    mv /var/lib/mysql /var/lib/mysql.backup
  4. Create the new mount point
    Code:
    mkdir /var/lib/mysql
  5. Configure /etc/fstab so that the new partition is mounted when the server boots (adjust values as necessary)
    Code:
    echo "/dev/sdc1     /var/lib/mysql     ext3     defaults,usrquota    0 1" >> /etc/fstab
  6. Mount the new partition. The following command will mount everything in /etc/fstab:
    Code:
    mount -a
  7. Change the ownership of the mount point so that it is accessible to the user “mysql”
    Code:
    chown mysql:mysql /var/lib/mysql
  8. Ensure that the permissions of the mount point are correct
    Code:
    chmod 711 /var/lib/mysql
  9. Start mysql and tailwatchd
    Code:
    /scripts/restartsrv_mysql --start
    /scripts/restartsrv_tailwatchd --start
  10. Ensure that the MySQL data directory is mounted correctly:
    Code:
    mount | grep /var/lib/mysql
  11. You should see a line that looks like this:
    /dev/sdc1 on /var/lib/mysql type ext3 (rw,usrquota)

Source

The post How To Move MySQL Data Directory On A Separate Partition appeared first on Shine Servers.

Finding The best WordPress Theme for business



“Beauty awakens the soul to act.” This resplendent phrase of Dante’s couldn’t be truer than in the case of design. Today’s internet is no longer the drab and achromatic world it was a decade ago; it has transformed into a vivid and colourful virtual cosmos. When it comes to websites especially, the look and feel of a site clearly exert a strong influence on your visitors. A poor design and interface evokes mistrust and is one of the major reasons visitors distrust a website. Leaving your website dull and boring does a huge disservice to your business, and choosing the wrong theme can mean lower traffic, fewer visits and lower rankings in search engines.

A website is the digital portrayal of an organization and leaves a lasting impression in the minds of your visitors. As an extension of your business, your website should reflect your business personality. Once you have enough reasons to go for a revamp, you may think the battle is almost won, but you will soon realize that deciding to revamp is the easiest part of the process. It is selecting the perfect theme that will dictate the future of your website. Even though WordPress comes with a huge selection of both free and paid themes, choosing one can be an overwhelming decision, as you wouldn’t want your website to look run-of-the-mill. Picking the right theme involves much more than going with the latest flow, following the hippest trends or choosing the most aesthetic-looking theme. Read the tips below to see how you can pick the best WordPress theme for your website.

Define Your Needs

The first step, before you even start browsing themes, is to understand what your needs are. The perfect theme depends a lot on your business goals, the kind of business you are in, the themes your competitors are using and the themes your users would respond to. Don’t get distracted by pretty themes and the latest fads; choose a theme that fits your business instead of making your business fit the theme.

Pick a Theme relating to Your Industry

It is understandable to want to stand out from the crowd, but if a certain industry favours a certain kind of theme, there are usually good reasons for it. If you run an event planning or photography business, it is better to stick with an image-based theme even if you have the best wordsmiths on your team. Choose a theme that relates to your industry, niche, services or products.

Check for Versatility

A theme that is frozen in place and won’t scale with your business is not a good choice. Opt for a responsive theme that serves your needs well both now and in the future.

Strive for simplicity

It is very easy to get carried away and choose a complex theme with a lot going on, but such themes often suffer from speed problems and redundant code. Go for a simple, elegant theme for a timeless look.

Pick the Plugins

Ensure that whatever theme you use is compatible with the plugins you will need for your website.

Compatibility Issues

This is one of the most important factors when theme shopping. Ensure that the theme you choose works in the most popular browsers; themes that are W3C-valid and cross-browser compatible are a safe choice.

Apart from the above pointers, look at security (very important), speed (another big factor), responsiveness and SEO friendliness before finalising your WordPress theme.

After you have narrowed down a WordPress theme, the next step is to choose a hosting provider for your website. If you are on the lookout for hosting services, ShineServers has got you covered: the hosting services offered by ShineServers are affordable, reliable and secure.

The post Finding The best WordPress Theme for business appeared first on Shine Servers.

Resetting Root Password Using Rescue Mode


It’s a million-dollar question for anyone who is stuck and doesn’t remember the root password. If you are not able to reset the password on your Linux server, you will need to place the server into rescue mode, chroot into its file system, and run passwd to update the root password. Sounds easy? Let me show you how 🙂

  1. Place Server into Rescue Mode or If you have no idea how to do that then ask your hosting provider to do that for you.
  2. Connect to the rescue mode server using ssh as normally you do.
  3. It is always suggested to run ‘fsck’ (a file system check) every chance you get. It will save you the hassle of fsck running automatically during a reboot, making boot time take longer than expected.

This could be either /dev/sda1 or /dev/sdb1 depending on your setup.

I will be using /dev/sda1 in the rest of the example:

fsck -fyv /dev/sda1

This will force a file system check (f flag), automatically respond ‘yes’ to any questions prompted (y flag), and display a verbose output at the very end (v flag).

Mounting the file system:

a. Make a temporary directory:

mkdir /mnt/rescue

b. Mount to that temp directory:

mount /dev/sda1 /mnt/rescue

4. Now we use ‘chroot’. chroot sets the apparent root of the system to the mounted file system, so subsequent commands run against your original drive.

chroot /mnt/rescue

5. Now that we are chroot-ed into your original drive, all you have to do is run ‘passwd’ to update your root password on the original Server’s hard drive.

passwd

(This will prompt you for your new password twice, and then update the appropriate files.)

6. Exit out of chroot mode.

exit

7. Unmount your original drive

umount /mnt/rescue

8. Exit out of SSH and Exit Rescue Mode.
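The steps above can be sketched as a single script. This is an illustrative sketch only, assuming the root partition is /dev/sda1 (adjust for your layout); DRY_RUN defaults to 1 so the script merely prints the commands, and you would set DRY_RUN=0 inside a real rescue environment to execute them.

```shell
#!/bin/sh
# Sketch of the whole rescue-mode password reset in one place.
# DRY_RUN=1 (the default) prints each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

run fsck -fyv /dev/sda1           # force a check, auto-answer yes, verbose
run mkdir -p /mnt/rescue          # temporary mount point
run mount /dev/sda1 /mnt/rescue   # mount the original root partition
run chroot /mnt/rescue passwd     # set root's password on the real disk
run umount /mnt/rescue            # clean up before exiting rescue mode
```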

The post Resetting Root Password Using Rescue Mode appeared first on Shine Servers.

Compile and Install a LAMP(Linux/Apache/MySQL/PHP) Server from Source


Most out-of-the-box Red Hat Linux installations will have one or more of the LAMP components installed via RPM files. I personally believe in installing things like this from source, so I get the most control over what’s compiled in, what’s left out, etc. But source code installs can wreak havoc if overlaid on top of RPM installs, as the two most likely won’t share the same directories, etc.

If you have not yet installed your Linux OS, or just for future reference, do not choose to install Apache, PHP, or MySQL during the system installation. Then you can immediately proceed with the source-based install listed here.

Note: to install applications from source code, you will need a C/C++ compiler (gcc and g++) installed. This is generally taken care of, but I’ve had enough queries about it that I’ve added this note to avoid getting more! You can use your distribution’s install CDs to get the proper version of the compiler. Or, if you are using an RPM-based distro, you can use a site like http://www.rpmfind.net/ to locate the correct RPM version for your system. (You will obviously not be able to use/rebuild a source RPM to get the compiler installed, as you need the compiler to build the final binary RPM!) On a Fedora system, you can run this command:

su - root
yum install gcc gcc-c++

Log in as root

Because we will be installing software to directories that “regular” users don’t have write access to, and also possibly uninstalling RPM versions of some applications, we’ll log in as root. The only steps that need root access are the actual installation steps, but by doing the configure and make steps as root, the source code will also be inaccessible to “regular” users.

If you do not have direct access (via keyboard) to the server, PLEASE use Secure Shell (SSH) to access the server and not telnet!! Whenever you use telnet (or plain FTP for that matter), you are transmitting your username, password, and all session information in “plain text”. This means that anyone who can access a machine someplace between your PC and your server can snoop your session and get your info. Use encryption wherever possible!

su - root

Remove RPM Versions of the Applications

Before we start with our source code install, we need to remove all the existing RPM files for these products. To find out what RPMs are already installed, use the RPM query command:

rpm -qa

in conjunction with grep to filter your results:

rpm -qa | grep -i apache
rpm -qa | grep -i httpd
rpm -qa | grep -i php
rpm -qa | grep -i mysql

The ‘httpd’ search is in case you have Apache2 installed via RPM.
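The four queries above can also be collapsed into one case-insensitive pattern. In this sketch a sample package list stands in for a live `rpm -qa` (so the filtering can be demonstrated on any system); on a real server you would pipe `rpm -qa` into the same grep.

```shell
# Combine the four grep queries into one case-insensitive pattern.
# sample_rpm_qa is a stand-in for `rpm -qa` on a real RPM-based system.
sample_rpm_qa() {
    printf '%s\n' httpd-2.0.52-9 php-4.3.9-3 mysql-4.1.7-2 coreutils-5.2.1
}
sample_rpm_qa | grep -iE 'apache|httpd|php|mysql'
# On a real server: rpm -qa | grep -iE 'apache|httpd|php|mysql'
```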

To remove the RPMs generated by these commands, do

rpm -e filename

for each RPM you found in the query. If you have any content in your MySQL database already, the RPM removal step should not delete the database files. When you reinstall MySQL, you should be able to move all those files to your new MySQL data directory and have access to them all again.

Get the Source Code for all Applications

We want to put all our source code someplace central, so it’s not getting mixed up in someone’s home directory, etc.

cd /usr/local/src

One way application source code is distributed is in what are known as “tarballs.” The tar command is usually associated with making tape backups – tar stands for Tape ARchive. It’s also a handy way to pack up multiple files for easy distribution. Use the man tar command to learn more about how to use this very flexible tool.

At the time of updating this, the current versions of all the components we’ll use are:

MySQL – 4.1.22
Apache – 1.3.37
PHP – 4.4.6

Please note: these are the only versions of these that I have set up myself, and verified these steps against. If you use another version of any component, especially a newer version, this HOWTO may not be accurate, and I won’t be able to provide free support under those circumstances. Paid support and assistance is always available however.

wget http://www.php.net/distributions/php-4.4.6.tar.gz
wget http://apache.oregonstate.edu/httpd/apache_1.3.37.tar.gz

There may be an Apache mirror closer to you – check their mirror page for other sources. Then insert the URL you get in place of the above for the wget command.

For MySQL, go to http://www.mysql.com/ and choose an appropriate mirror to get the newest MySQL version (v4.1.22).

Unpack the Source Code

tar zxf php-4.4.6.tar.gz
tar zxf apache_1.3.37.tar.gz
tar zxf mysql-4.1.22.tar.gz

This should leave you with the following directories:

/usr/local/src/php-4.4.6
/usr/local/src/apache_1.3.37
/usr/local/src/mysql-4.1.22

Build and Install MySQL

First, we create the group and user that “owns” MySQL. For security purposes, we don’t want MySQL running as root on the system. To be able to easily identify MySQL processes in top or a ps list, we’ll make a user and group named mysql:

groupadd mysql
useradd -g mysql -c "MySQL Server" mysql

If you get any messages about the group or user already existing, that’s fine. The goal is just to make sure we have them on the system.

What the useradd command is doing is creating a user mysql in the group mysql with the “name” of MySQL Server. This way, when it’s shown in various user- and process-watching apps, you’ll be able to tell what it is right away.

Now we’ll change to the “working” directory where the source code is, change the file ‘ownership’ for the source tree (this prevents build issues reported in some cases where the packager’s username was included in the source and you aren’t compiling under the exact same name!), and start building.

The configure command has many options you can specify. I have listed some fairly common ones; if you’d like to see others, do:

./configure --help | less

to see them all. Read the documentation on the MySQL website for a more detailed explanation of each option.

cd /usr/local/src/mysql-4.1.22

chown -R root.root *

make clean

./configure \
--prefix=/usr/local/mysql \
--localstatedir=/usr/local/mysql/data \
--disable-maintainer-mode \
--with-mysqld-user=mysql \
--with-unix-socket-path=/tmp/mysql.sock \
--without-comment \
--without-debug \
--without-bench

18-Jul-2005: If you are installing MySQL 4.0.x on Fedora Core 4, there is a problem with LinuxThreads that prevents MySQL from compiling properly. Installing on Fedora Core 3 works fine though. Thanks to Kevin Spencer for bringing this to my attention. There is a workaround listed at http://bugs.mysql.com/bug.php?id=9497. Thanks to Collin Campbell for that link. Another solution can be found at http://bugs.mysql.com/bug.php?id=2173. Thanks to Kaloyan Raev for that one.

Now comes the long part, where the source code is actually compiled and then installed. Plan to get some coffee or take a break while this step runs. It could be 10-15 minutes or more, depending on your system’s free memory, load average, etc.

make && make install

Configure MySQL

MySQL is “installed” but we have a few more steps until it’s actually “done” and ready to start. First run the script which actually sets up MySQL’s internal database (named, oddly enough, mysql).

./scripts/mysql_install_db

Then we want to set the proper ownership for the MySQL directories and data files, so that only MySQL (and root) can do anything with them.

chown -R root:mysql /usr/local/mysql
chown -R mysql:mysql /usr/local/mysql/data

Copy the default configuration file for the expected size of the database (small, medium, large, huge):

cp support-files/my-medium.cnf /etc/my.cnf
chown root:sys /etc/my.cnf
chmod 644 /etc/my.cnf

If you get an error message about the data directory not existing, etc., something went wrong in the mysql_install_db step above. Go back and review that; make sure you didn’t get some sort of error message when you ran it, etc.

Now we have to tell the system where to find some of the dynamic libraries that MySQL will need to run. We use dynamic libraries instead of static to keep the memory usage of the MySQL program itself to a minimum.

echo "/usr/local/mysql/lib/mysql" >> /etc/ld.so.conf
ldconfig

Now create a startup script, which enables MySQL auto-start each time your server is restarted.

cp ./support-files/mysql.server /etc/rc.d/init.d/mysql
chmod +x /etc/rc.d/init.d/mysql
/sbin/chkconfig --level 3 mysql on

Then set up symlinks for all the MySQL binaries, so they can be run from anyplace without having to include/specify long paths, etc.

cd /usr/local/mysql/bin
for file in *; do ln -s /usr/local/mysql/bin/$file /usr/bin/$file; done
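The loop above can be exercised safely before running it for real. In this sketch, temporary scratch directories stand in for /usr/local/mysql/bin and /usr/bin, so you can see exactly what the loop creates; substitute the real paths on the server.

```shell
# Demonstrate the symlinking loop against scratch directories.
# $src stands in for /usr/local/mysql/bin, $dst for /usr/bin.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/mysql" "$src/mysqladmin"   # fake binaries for the demo
cd "$src"
for file in *; do ln -s "$src/$file" "$dst/$file"; done
ls "$dst"
```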

MySQL Security Issues

First, we will assume that only applications on the same server will be allowed to access the database (i.e., not a program running on a physically separate server). So we’ll tell MySQL not to even listen on port 3306 for TCP connections like it does by default.

Edit /etc/my.cnf and uncomment the

skip-networking

line (delete the leading #).
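If you prefer to script the edit rather than open an editor, a sed one-liner does the same thing. The sketch below runs against a temporary copy so it can be checked safely; on the real server you would point sed at /etc/my.cnf (the -i.bak flag keeps a backup of the original).

```shell
# Uncomment the skip-networking line in place.
# A temporary file with sample my.cnf content stands in for /etc/my.cnf.
cnf=$(mktemp)
printf '[mysqld]\n#skip-networking\nkey_buffer = 16M\n' > "$cnf"
sed -i.bak 's/^#skip-networking/skip-networking/' "$cnf"
grep '^skip-networking' "$cnf"   # prints: skip-networking
```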

For more security info, check out this MySQL security tutorial.

Start MySQL

First, test the linked copy of the startup script in the normal server runlevel start directory, to make sure the symlink was properly set up:

cd ~
/etc/rc.d/rc3.d/S90mysql start

If you ever want to manually start or stop the MySQL server, use these commands:

/etc/rc.d/init.d/mysql start
/etc/rc.d/init.d/mysql stop

Let’s “test” the install to see what version of MySQL we’re running now:

mysqladmin version

It should answer back with the version we’ve just installed…

Now we’ll set a password for the MySQL root user (note that the MySQL root user is not the same as the system root user, and definitely should not have the same password as the system root user!).

mysqladmin -u root password new-password

(obviously, insert your own password in the above command instead of the “new-password” string!)

You’re done! MySQL is now installed and running on your server. It is highly recommended that you read about MySQL security and lock down your server as much as possible. The MySQL site has info at http://www.mysql.com/doc/en/Privilege_system.html.

Test MySQL

To run a quick test, use the command line program mysql:

mysql -u root -p

and enter your new root user password when prompted. You will then see the MySQL prompt:

mysql>

First, while we’re in here, we’ll take care of another security issue and delete the sample database test and all default accounts except for the MySQL root user. Enter each of these lines at the mysql> prompt:

drop database test;
use mysql;
delete from db;
delete from user where not (host="localhost" and user="root");
flush privileges;

As another security measure, I like to change the MySQL administrator account name from root to something harder to guess. This will make it that much harder for someone who gains shell access to your server to take control of MySQL.

MAKE SURE YOU REMEMBER THIS NEW NAME, AND USE IT WHEREVER
YOU SEE “root” IN OTHER DIRECTIONS, WEBSITES, ETC.

ONCE YOU DO THIS STEP, THE USERNAME “root” WILL CEASE TO
EXIST IN YOUR MYSQL CONFIGURATION!

update user set user="sqladmin" where user="root";
flush privileges;

Now, on with the “standard” testing… First, create a new database:

create database foo;

You should see the result:

Query OK, 1 row affected (0.04 sec)

mysql>

Delete the database:

drop database foo;

You should see the result:

Query OK, 0 rows affected (0.06 sec)

mysql>

To exit from mysql enter \q:

\q

Build and Install Apache (with DSO support)

The advantage to building Apache with support for dynamically loaded modules is that in the future, you can add functionality to your webserver by just compiling and installing modules, and restarting the webserver. If the features were compiled into Apache, you would need to rebuild Apache from scratch every time you wanted to add or update a module (like PHP). Your Apache binary is also smaller, which means more efficient memory usage.

The downside to dynamic modules is a slight performance hit compared to having the modules compiled in.

cd /usr/local/src/apache_1.3.37

make clean

./configure \
--prefix=/usr/local/apache \
--enable-shared=max \
--enable-module=rewrite \
--enable-module=so

make && make install

Build and Install PHP

This section has only been tested with PHP v4.x. If you are trying to build PHP 5.x, I do not have experience with this yet, and do not provide free support for you to get it working. Please note that there are many options which can be selected when compiling PHP. Some will have library dependencies, meaning certain software may need to be already installed on your server before you start building PHP. You can use the command

./configure --help | less

once you change into the PHP source directory. This will show you a list of all possible configuration switches. For more information on what these switches are, please check the PHP website documentation.

cd /usr/local/src/php-4.4.6

./configure \
--with-apxs=/usr/local/apache/bin/apxs \
--disable-debug \
--enable-ftp \
--enable-inline-optimization \
--enable-magic-quotes \
--enable-mbstring \
--enable-mm=shared \
--enable-safe-mode \
--enable-track-vars \
--enable-trans-sid \
--enable-wddx=shared \
--enable-xml \
--with-dom \
--with-gd \
--with-gettext \
--with-mysql=/usr/local/mysql \
--with-regex=system \
--with-xml \
--with-zlib-dir=/usr/lib

make && make install

cp php.ini-dist /usr/local/lib/php.ini

I like to keep my config files all together in /etc. I set up a symbolic link like this:

ln -s /usr/local/lib/php.ini /etc/php.ini

Then I can just open /etc/php.ini in my editor to make changes.

Recommended reading on securing your PHP installation is this article at SecurityFocus.com.

Edit the Apache Configuration File (httpd.conf)

I like to keep all my configuration files together in /etc, so I set up a symbolic link from the actual location to /etc:

ln -s /usr/local/apache/conf/httpd.conf /etc/httpd.conf

Now open /etc/httpd.conf in your favorite text editor, and set all the basic Apache options in accordance with the official Apache instructions (beyond the scope of this HOWTO).

Also recommended is the article on securing Apache.

To ensure your PHP files are properly interpreted, and not just downloaded as text files, remove the # at the beginning of the lines which read:

#AddType application/x-httpd-php .php
#AddType application/x-httpd-php-source .phps

If the AddType lines above don’t exist, manually enter them (without the leading # of course) after the line

AddType application/x-tar .tgz

or anyplace within the <IfModule mod_mime.c> section of httpd.conf.

If you wish to use other/additional extensions/filetypes for your PHP scripts instead of just .php, add them to the AddType directive:

AddType application/x-httpd-php .php .foo
AddType application/x-httpd-php-source .phps .phtmls

An example: if you wanted every single HTML page to be parsed and processed like a PHP script, just add .htm and .html:

AddType application/x-httpd-php .php .htm .html

There will be a bit of a performance loss if every single HTML page is being checked for PHP code even if it doesn’t contain any. But if you want to use PHP but be “stealthy” about it, you can use this trick.

Add index.php to the list of valid Directory Index files so that your “default page” in a directory can be named index.php.

<IfModule mod_dir.c>
DirectoryIndex index.php index.htm index.html
</IfModule>

You can add anything else you want here too. If you want foobar.baz to be a valid directory index page, just add the .baz filetype to the AddType line, and add foobar.baz to the DirectoryIndex line.

Start Apache

We want to set Apache up with a normal start/stop script in /etc/rc.d/init.d so it can be auto-started and controlled like other system daemons. Set up a symbolic link for the apachectl utility (installed automatically as part of Apache):

ln -s /usr/local/apache/bin/apachectl /etc/rc.d/init.d/apache

Then set up auto-start for runlevel 3 (where the server will go by default):

ln -s /etc/rc.d/init.d/apache /etc/rc.d/rc3.d/S90apache

Then start the daemon:

/etc/rc.d/init.d/apache start

You can check that it’s running properly by doing:

ps -ef

and look for the httpd processes.
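A common refinement is to filter the ps output so grep doesn’t match its own entry in the process list. The sketch below uses simulated ps output so the trick can be demonstrated anywhere: the pattern ‘[h]ttpd’ matches the literal string “httpd” but not the grep command line itself (which contains “[h]ttpd”).

```shell
# Simulated ps output: one real httpd process plus the grep command line.
# The bracket expression '[h]' still matches 'h', so 'httpd' matches,
# but the literal text '[h]ttpd' in the grep line does not.
printf '%s\n' 'root  1234  httpd' 'user  5678  grep [h]ttpd' \
    | grep '[h]ttpd'
# prints only: root  1234  httpd
# On a real server: ps -ef | grep '[h]ttpd'
```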

The post Compile and Install a LAMP(Linux/Apache/MySQL/PHP) Server from Source appeared first on Shine Servers.
