Load-balancing Tomcat with Apache


February 2008



Introduction

Tomcat is a popular application server used to host web applications. Apache is a popular web server which provides services such as HTTPS encryption and decryption, URL rewriting, etc. Apache can also be used as a load balancer to balance load between several Tomcat application servers.

This article briefly discusses some alternatives for load balancing an application server. It covers the implementation details for setting up load balancing with Apache using the ‘mod_proxy’ module. It also looks at some of the features provided by Apache, such as ‘server affinity’ and the safe removal of a node.

A downloadable web application is provided which can be used to test load balancing with Apache. A JMeter script is also provided for load testing Apache.

Setting up Apache and load balancing is the role of the system administrator. Unless a Java developer works in a small team or is setting up a test environment, he won’t get involved in setting up load balancing. However, it is good to understand the principles behind load balancing; the knowledge might help a developer fix issues in a live environment.

This article discusses load balancing in the context of a web application. The application is accessed using HTTPS, requires a user to log in, and stores some user-specific information in the session.

Background knowledge

It is assumed that the reader is familiar with the following concepts and technologies:

  • Apache (Version 2.2)
  • Tomcat (version 5.5.23)
  • JMeter for load testing web applications
Some important terms

Load balancing – User requests are processed by more than one server, with all servers sharing the load ‘equally’.
Server affinity (sticky sessions) – With server affinity, multiple requests from a user are processed by the same server. This is required for non-clustered servers, as the user session data is held on one server only and all requests from that user have to go to the server which holds the session data for that user.
Transparent failover – The user is not aware of a server crash. Transparent failover can be request level or session level, and can be achieved by clustering application servers. With clustering, all the servers are the same, so the loss of a server does not interrupt the service. With load balancing alone, the user has to log in again when a server crashes.
Server cluster – A group of servers which appear to be a single server to the user. The loss of a server is transparent to the user. User data is held on all servers in the cluster group; any server can process any user request, and the loss of any server does not lead to an interruption of service. The user does not have to log in again after a server failure. Since the user session data is replicated over the network to more than one server, there is a performance overhead, so clustering should be avoided unless transparent failover is required.
Scalability – A measure of the ability of a system to handle increasing load without reducing response time.
Response time – The time taken to process a user request.
Real workers – The term used by Apache for the servers which are the components of a load-balanced system. The real workers do the actual work, and a real worker is usually a remote host.
Virtual worker – In Apache, the load balancer is referred to as the virtual worker; it delegates processing to the real workers.
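To make the interplay between round-robin delegation and server affinity concrete, here is a minimal, hypothetical simulation (plain Python, not Apache’s implementation; the worker names are illustrative):

```python
import itertools

class StickyBalancer:
    """Minimal sketch of a load balancer with server affinity."""

    def __init__(self, workers):
        self.workers = itertools.cycle(workers)  # round robin over real workers
        self.affinity = {}  # session id -> worker (the affinity table)

    def route(self, session_id):
        # New sessions are assigned round robin; sessions already seen
        # stick to the worker that holds their session data.
        if session_id not in self.affinity:
            self.affinity[session_id] = next(self.workers)
        return self.affinity[session_id]

lb = StickyBalancer(["tomcat1", "tomcat2"])
first = lb.route("user1")           # new session: assigned round robin
assert lb.route("user1") == first   # sticky: same worker on every request
```

Without the affinity table, a second request from “user1” could land on a server that has no session data for that user, which is exactly why sticky sessions are needed for non-clustered setups.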

Load balancing algorithms

  • Round robin – requests are delegated to each server in turn, so every server handles an equal share of the requests
  • Weighted round robin – servers of different capacity are assigned requests in proportion to their capacity (as defined by a load factor). Apache has two versions of this:
    • Request counting algorithm – requests are delegated in round-robin manner irrespective of the nature of the request
    • Weighted traffic counting algorithm – Apache delegates traffic to real workers based on the number of bytes in the request
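As a sketch of how weighted request counting distributes load (an illustrative re-implementation, not Apache’s actual scheduler):

```python
from itertools import islice

def weighted_round_robin(load_factors):
    """Yield worker names in proportion to their load factors.

    load_factors: dict mapping worker name -> integer load factor.
    A worker with load factor 2 receives twice as many requests as
    one with load factor 1. Illustrative only.
    """
    counters = {w: 0 for w in load_factors}
    while True:
        # Pick the worker that is furthest behind its target share.
        worker = max(load_factors, key=lambda w: load_factors[w] - counters[w])
        counters[worker] += 1
        # Once every worker has met its share, start a new cycle.
        if all(counters[w] >= load_factors[w] for w in load_factors):
            counters = {w: 0 for w in load_factors}
        yield worker

# tomcat1 has twice the capacity of tomcat2:
schedule = list(islice(weighted_round_robin({"tomcat1": 2, "tomcat2": 1}), 6))
# schedule contains tomcat1 twice as often as tomcat2
```

With equal load factors this degenerates to plain round robin, which is how the request counting algorithm behaves.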

Comparing load balancing with clustering

Both load balancing and clustering aim to improve scalability by spreading load over more than one server; both aim to provide horizontal scalability.

  • Load balancing: The user has to log in again after a server crash.
    Clustering: The user does not have to log in again after a server crash, so failover is transparent to the user.
  • Load balancing: Load balancing is done by the web server, by DNS, by a hardware load balancer, or by the Tomcat balancer web application.
    Clustering: Clustering capability is provided by the application server. Clustering also requires load balancing.
  • Load balancing: The application servers (e.g. Tomcat) do not communicate with each other.
    Clustering: The application servers (e.g. Tomcat) communicate with each other.
  • Load balancing: There is minimal effect on response time when moving up from a single server to load-balanced servers under the same load.
    Clustering: Response times could deteriorate when moving from a single server to a clustered system, as the session data is now replicated over the network to the other servers. The more session data there is, the greater the deterioration compared to a single server. Response time also depends on the number of nodes in the cluster: the more nodes, the greater the deterioration, as data is replicated using TCP to every single node in the cluster. This can be reduced by using UDP to replicate session data, but with UDP session data could be lost during replication, so a user might have to log in again if a server crashes.
  • Load balancing: Load balancing can be used independently of clustering.
    Clustering: Clustering also requires load balancing but makes ‘server affinity’ redundant. It provides ‘transparent failover’ capability over load balancing, at the cost of increased response time and a more complex configuration.
  • Load balancing: Usually no changes are required to move an application from a single server to a load-balanced set of servers.
    Clustering: The application must meet certain criteria for it to work in a clustered environment: the user variables stored in the session must be serializable, and to get good response time, only small objects should be stored in the session.

Choices for implementing Load balancing

Hardware based load balancing

  • Pros
    • Fast
  • Cons
    • Expensive
    • Proprietary
    • Less flexible

Software based load balancing (e.g. Apache or Tomcat balancer)

  • Pros
    • Open source and free to implement with Apache and Tomcat balancer application
    • Easy to configure
    • More flexible
  • Cons
    • Lower performance compared to hardware based solution

Alternatives for software based load balancing

  • Apache ‘mod_proxy’ module or ‘mod_jk’ module. However, ‘mod_proxy’ is easier to configure and newer than the ‘mod_jk’ module.
  • Using Tomcat balancer application
  • Linux virtual server
  • Using DNS for load balancing

The rest of this article discusses load balancing using Apache ‘mod_proxy’ module.

Load balancing with server affinity

A simple load-balanced setup which does not provide ‘server affinity’ is not suitable for stateful web applications. In stateful web applications, user state (session data) is held on one server, and all further requests from that user must be processed by the same server. Hence server affinity (sticky sessions) is necessary for stateful web applications which don’t use clustering. The minimum load-balanced setup is two application servers and one web server (the load balancer). If HTTPS decryption is required, then the same Apache server can also be used for HTTPS decryption. In a load-balanced system with server affinity, all requests from user 1 go to Tomcat instance 1. This is shown below:

Apache implements server affinity by rewriting the ‘jsessionid’ sent by Tomcat to the browser. The Tomcat worker name is added to the end of the ‘jsessionid’ before the ‘jsessionid’ is sent to the browser. In the next request from the same user, the Tomcat worker name is read from the ‘jsessionid’ and the request is delegated to that Tomcat real worker. The ‘jsessionid’ stored as a cookie in the browser has the Tomcat worker name, as shown in the screenshot below:
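The suffix convention can be illustrated with a small helper (a hypothetical sketch, not Apache’s actual parsing code; the cookie value and worker name follow the configuration used in this article):

```python
def route_from_jsessionid(jsessionid):
    """Return the worker route appended after the last '.' in the
    session id (e.g. 'A1B2C3D4.tomcat1' -> 'tomcat1'), or None if
    the id carries no route suffix yet."""
    head, sep, route = jsessionid.rpartition(".")
    return route if sep else None

print(route_from_jsessionid("A1B2C3D4.tomcat1"))  # prints 'tomcat1'
```

Requests whose session id carries no suffix (a user’s very first request) have no affinity yet, so the balancer is free to pick any worker for them.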

The Apache configuration (in httpd.conf) to setup Apache as a load balancer for 2 application servers is shown below:

ProxyPass /apache-load-balancing-1.0 balancer://mycluster stickysession=JSESSIONID
<Proxy balancer://mycluster>
    BalancerMember ajp://tomcat1:8009/apache-load-balancing-1.0 route=tomcat1 loadfactor=50
    BalancerMember ajp://tomcat2:8009/apache-load-balancing-1.0 route=tomcat2 loadfactor=50
</Proxy>
The above setup requires the ‘mod_proxy’ module to be loaded. The first line sets up a reverse proxy for the request path ‘/apache-load-balancing-1.0’ and delegates all requests to the load balancer (virtual worker) named ‘mycluster’. Sticky sessions are enabled and implemented using the ‘JSESSIONID’ cookie. The ‘Proxy’ section lists all the servers (real workers) which the load balancer can use. For each participant in load balancing, we define the URL (using the http, ftp or ajp protocol) and give it a route name which matches the ‘jvmRoute’ of the Tomcat engine defined in the Tomcat ‘server.xml’. The ‘loadfactor’ can be a number between 1 and 100. Set it to 50 so it can be increased or decreased dynamically later on. The Tomcat ‘server.xml’ in the ‘TOMCAT_FOLDER/conf’ folder should have this configuration:
...
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
...

With 2 browser sessions, the Tomcat server console and the server response show that all requests from the same user are processed by the same server.

Load balancing manager and safe removal of server node

Apache load balancing manager

Apache includes a ‘balancer manager’ which can be used to check the status of the servers used for load balancing and to prepare for the safe removal of a node. To enable the balancer manager, add this section to the Apache ‘httpd.conf’ file:
<Location /balancer-manager>
    SetHandler balancer-manager
</Location>

Balancer manager requires ‘mod_proxy’ and ‘mod_proxy_balancer’ modules to be loaded.

Balancer manager will now be accessible at the url ‘/balancer-manager’.
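Since the balancer manager allows worker settings to be changed at runtime, it is worth restricting who can reach it. A sketch using Apache 2.2 access control directives (the allowed host is an example; adjust it for your environment):

```apache
<Location /balancer-manager>
    SetHandler balancer-manager
    # Allow access from the local machine only (Apache 2.2 syntax)
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
```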

Screenshot of this is shown below:

Safe removal of a server node

When an application server (real worker) has to be taken offline for maintenance, set a very low load factor for the server due to be taken offline. Few new users will be served by that server, while all existing users ‘sticking’ to this real worker will continue to be processed by it. This will help reduce the number of users who have to log in again when the real worker is taken offline. After some time, the real worker is disabled and taken offline. Once it is ready to come back online, the worker is enabled again in the balancer manager.
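The effect of lowering the load factor can be sketched with a small simulation (hypothetical Python, not Apache code; only new sessions follow the load factors, since existing sessions are pinned by affinity):

```python
import random

def assign_new_sessions(load_factors, n, seed=0):
    """Distribute n brand-new sessions across workers in proportion
    to their load factors. Existing sessions are unaffected because
    server affinity pins them to their worker."""
    rng = random.Random(seed)
    workers = list(load_factors)
    weights = [load_factors[w] for w in workers]
    return [rng.choices(workers, weights)[0] for _ in range(n)]

# tomcat1 is being drained: its load factor has been dropped from 50 to 1.
draining = assign_new_sessions({"tomcat1": 1, "tomcat2": 50}, 100)
# Nearly all new sessions now land on tomcat2.
```

Once the remaining sticky sessions on the draining worker have expired or logged out, the worker can be disabled with few or no users forced to log in again.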

Tests

About the attached web application and JMeter script

The attached web application contains a single servlet and 3 JSP pages. The servlet has hard-coded usernames and passwords for 10 users and is used to authenticate the user and store the username in the session. The servlet also prints the username and the name of the server. The login JSP (index.jsp) is used to log in, and the second JSP (greeting.jsp) is used to print a greeting after the user logs in; this JSP also prints the username and server name in the response. The third JSP (user_details.jsp) is used to print the user details along with the server name and IP address of the server used to process the request; this JSP also prints the username and server name to the console. The name of the server is set up as a context parameter in the web.xml. Change it before deploying so that the 2 web applications have different names, to make it easier to identify them:
	<context-param>
		<param-name>serverName</param-name>
		<param-value>Tomcat instance 1</param-value>
	</context-param>

The attached JMeter script sets up 10 threads, with each thread being used to log in 1 user and request user details. Login is done once per thread/user, and all subsequent requests from that user are requests to get user details.

Manual test with 2 browser windows to show server affinity

Open 2 browser windows and log in using different usernames. The server consoles in Tomcat will show that requests from one user always go to a particular Tomcat instance, demonstrating server affinity. See the screenshot below:

Load testing with JMeter

The attached JMeter script is used to simulate a load of 10 users and has been set up to record the response. Comparing the response from the server and the server console, we can see that the load of 10 users is shared equally between the 2 servers and that all requests from one user are processed by the same server.

Sudden loss of one server and then restoring the server

A server crash can be simulated by shutting down Tomcat. This is best done using JMeter as a client to simulate a load of 10 users continuously requesting pages from the servers. Run the attached JMeter script and, once the load test is running, take one server down. Any subsequent requests to the offline server will be redirected to the second server. When one server goes down, the user session data is lost, so all the users who have ‘affinity’ to that server will have to log in again.

Reducing single point of failure

The load balancer can become a single point of failure. This risk can be reduced by using round-robin DNS to delegate user requests to more than one load balancer, with each load balancer delegating requests to more than one application server. In this scenario, if a load balancer and/or an application server goes down, the other load balancer and application servers can still provide some level of service. This is illustrated in the diagram below:
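The round-robin DNS rotation can be sketched as follows (illustrative Python; the addresses are reserved example values, and in reality the DNS server performs the rotation when answering queries):

```python
import itertools

def rotate_a_records(addresses):
    """Cycle through a hostname's A records, as clients would see
    successive round-robin DNS answers. Illustrative only."""
    return itertools.cycle(addresses)

# Two hypothetical load balancer addresses behind one hostname:
lookups = rotate_a_records(["192.0.2.10", "192.0.2.11"])
```

Successive lookups alternate between the two load balancer addresses, so clients spread themselves across both balancers.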

Conclusion

Apache can be used to load balance Tomcat servers. This setup provides other useful features such as ‘server affinity’ and the safe removal of nodes for scheduled maintenance. Load balancing is recommended if transparent failover is not required, and it is easy to set up load balancing and ‘server affinity’ with Apache. JMeter can be used to load test the configuration and to test the behaviour in case of a server crash. The load balancer can become a single point of failure; this risk can be reduced by using 2 load balancers and round-robin DNS to delegate requests to more than one load balancer.

Source Files

web application.zip
apache load balance load test script.jmx

Biography

Avneet Mangat has 6 years’ experience in Java/J2EE. Currently working as Lead Developer at Active Health Partners (www.ahp.co.uk). Bachelor’s degree in Software Engineering, Sun Certified Web Developer and Java Programmer, Adobe Certified Flash Designer and Prince2 certified (foundation). Lead developer of the open source tool DBBrowser, please see http://databasebrowser.sourceforge.net/. Outside interests include photography and travelling. Please contact me at avneet.mangat@ahp.co.uk or avneet.mangat@gmail.com for more information.
