Hello, my dears!
I have a question about simplifying the stack.
Current config:
Balancer (nginx) -> 10 servers with nginx + php5-fpm (unix socket).
Planned:
Balancer (nginx) -> 10 servers with php5-fpm (TCP port).
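Roughly, the two variants look like this on the balancer (just a sketch: the IP addresses, upstream names and document root are made up for illustration; note that the planned variant needs fastcgi_pass rather than proxy_pass, and something still has to serve the static files):

    # current: the balancer proxies HTTP to nginx on each backend,
    # which talks to its local php5-fpm over a unix socket
    upstream backends_http {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
        # ... the other backends
    }
    server {
        listen 80;
        location / {
            proxy_pass http://backends_http;
        }
    }

    # planned: the balancer speaks FastCGI directly to php5-fpm
    # listening on a TCP port on each backend
    upstream backends_fpm {
        server 10.0.0.1:9000;
        server 10.0.0.2:9000;
        # ... the other backends
    }
    server {
        listen 80;
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
            fastcgi_pass backends_fpm;
        }
    }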

So, the question: which option is actually better? Or maybe you have more interesting solutions.

P.S. I understand that sockets are faster, but either way the traffic between the balancer and the backends still goes over the network.

3 Answers

It will be the same, within the measurement error. Nginx on the backends adds minimal overhead, and that variant is safer, since the backends can basically keep working without the balancer: if it fails, you can configure a new one or bring one up faster. Essentially you would be swapping an awl for soap, as the saying goes, i.e. trading one thing for its equivalent. If you do want to change something, try running the balancer on HAProxy; in some cases it has worked better for me than nginx, and in some cases nginx works better.
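If you do try HAProxy, a minimal balancer config in the spirit of the current scheme could look roughly like this (a sketch only; the section names, addresses and timeouts are illustrative, not taken from the question):

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend www
        bind *:80
        default_backend php_nodes

    backend php_nodes
        balance roundrobin
        option httpchk GET /
        server node1 10.0.0.1:80 check
        server node2 10.0.0.2:80 check
        # ... the other backends

Active health checks (check / option httpchk) are one practical thing HAProxy gives out of the box, where stock nginx of that era only marked backends down passively via max_fails / fail_timeout.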
  • Perhaps the question is a bit off-topic, but we have a server response wait of 0.3 sec and there is no way I can cut it down. Can you advise something? – Stupid11 Oct 25 '14 at 17:19
  • Waiting from the frontend to the backend? Firewall? Routing? Geographic distance between the servers? You can guess a lot here and it is useless; the options range from incorrectly set sysctl parameters to bad weather on Mars. What OS, what project architecture? How are you measuring the wait? – Kosher Janitor Oct 25 '14 at 17:24
  • www.etagi.com is the site.
    Geography actually has nothing to do with it: whether from Germany or from Moscow, it is 0.3 seconds of waiting even for a blank page (specifically checked).
    OS: Ubuntu, Debian.
    – Stupid11 Oct 25 '14 at 17:28
  • Stupid11: I mean the geography between the balancer and the servers; even between neighboring machines in the same data center such a delay can occur if the traffic goes through a dozen switches. Have you tried hitting the servers directly, without the balancer, is there a delay then? First you need to identify exactly where the delay arises, and only then try to do something about it (see the curl timing sketch after these comments). And just in case, while you are checking, disable any attack-protection system if you have one; a firewall that was configured long ago and then hastily changed can become a bottleneck and introduce a lot of delay. – Kosher Janitor Oct 25 '14 at 17:34
  • It turned out that the balancer and most of the servers are in the same data center and are connected directly through the router, but even the remote nodes respond in the same 0.3 seconds... whichever way you look at it ))) – Stupid11 Oct 25 '14 at 17:40
  • Stupid11: Then the problem is not in the balancer. Is the page really plain static, not generated by PHP? No database access on the test page? No accelerators like Varnish running? – Kosher Janitor Oct 25 '14 at 17:45
  • No dynamics... even an ordinary 1x1 px image waits the same 0.3 s. Varnish is not used – Stupid11 Oct 25 '14 at 17:48
  • Stupid11: What file system is used? How is data synced between the nodes? You can guess here for a long time, but your site does not open quickly anyway; as I see it you are not going to squeeze out more benchmark "parrots" this way. As for the missing load: can I create some? How much and for how long?
    I can run, on a gigabit channel:

    while true; do
        ab -n 2000 -c 1000 http://www.etagi.com/
    done

    if you want, of course; I do not mind the traffic.
    – Kosher Janitor Oct 25 '14 at 17:52
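To localize where the 0.3 s goes, as suggested in the comments above, one simple option is to time the same request against the balancer and against one backend directly (a sketch; the backend IP is a placeholder):

    # via the balancer
    curl -o /dev/null -s -w 'connect: %{time_connect}  ttfb: %{time_starttransfer}  total: %{time_total}\n' http://www.etagi.com/

    # the same request straight to one backend, bypassing the balancer
    curl -o /dev/null -s -w 'connect: %{time_connect}  ttfb: %{time_starttransfer}  total: %{time_total}\n' http://10.0.0.1/

If a backend on its own already answers in ~0.3 s, neither the balancer nor the socket-vs-port choice is where the time is being lost.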
Yes, actually on small configurations it is all the same; you lose or gain a couple-three percent of overhead. At the scale of 1000 servers that would be 20-30 servers, which is significant; in your case it does not matter.
  • Actually, sn depends on tenths of a second )) so if it speeds the service up by at least 0.1-0.2 seconds, that will be great – Stupid11 Oct 25 '14 at 17:24
  • www.etagi.com
    And it is difficult to verify... there is no suitable load ((( and in case of degradation, sn will drop ))
    I'll go ask on a highload forum, maybe they will offer more options )
    – Stupid11 Oct 25 '14 at 17:34
  • I don't see any point in winning 1-3 percent here. Say you can now serve not 1000 but 1030; that is, before you fell over at 1000 online and now you will fall over at 1030, which to my mind is the same thing.
    Well, creating a load is no tricky business, or you can simply reason it out without tests.
    – Fantastic Finch Oct 25 '14 at 17:36
  • Yes, we have everything, both test nodes and testers... we will try – Stupid11 Oct 25 '14 at 17:38
  • Given that your site already opens rather slowly (see /141025_N7_HC9),
    it seems pointless to me to win a couple of hundredths of a second; for the sake of that I would not even start changing the scheme, you need to dig in a different direction.
    – Fantastic Finch Oct 25 '14 at 17:42
  • Funny numbers; with pr-cy.ru we see 0.8 sec – Stupid11 Oct 25 '14 at 17:45
  • Strange; you say 10 servers, and the site looks like it has little data. That really is a lot of time. – Creepy15 Oct 25 '14 at 18:23
  • Creepy15: not all of our developers are equally qualified – Stupid11 Oct 25 '14 at 18:39
  • Creepy15: I'm more on the hardware and OS side – Stupid11 Oct 25 '14 at 18:40
  • From my Chrome I see 1.3, and in places 2.0 seconds – Fantastic Finch Oct 25 '14 at 21:13
Sockets are faster than going over the network, even to localhost, especially at high request rates.
nginx should be faster than php-fpm by definition; otherwise why have PHP at all, you could serve everything as static files.
Nobody serves static content more efficiently than nginx.
With any optimization, the first question must be: where, exactly, is the bottleneck?
So the way to speed things up is more PHP worker processes + caching + profiling the site's own code + profiling the database queries.
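As an illustration of the "nginx serves static, php-fpm only gets PHP" point, a typical backend server block looks roughly like this (a sketch; the root and the socket path are illustrative):

    server {
        listen 80;
        root /var/www;

        # static files are served by nginx itself, with client-side caching
        location ~* \.(css|js|png|jpg|gif|ico)$ {
            expires 30d;
            access_log off;
        }

        # only .php requests are handed to php5-fpm over a local unix socket
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }
    }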