Scaling a web application across multiple servers

If you want to create a user from a shell script, use useradd -p, which takes the already-encrypted form of the password. That way you never have to handle the password in plain text.

Note that on Debian, users are normally added with adduser, which applies various policy settings configured in /etc/adduser.conf. useradd is the "raw" command, so you have to take care yourself that group membership and home-directory permissions come out as you expect.
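
A minimal sketch of this in Python, under assumptions not stated above: a Debian-like system, OpenSSL 1.1.1+ on the PATH, root privileges, and a made-up account name.

    import getpass
    import subprocess

    username = "deploy"           # hypothetical account name
    password = getpass.getpass()  # read without echoing, so it never sits in the script

    # openssl passwd -6 produces a SHA-512 crypt hash ($6$...); useradd -p
    # expects exactly this pre-encrypted form, never the plain-text password.
    hashed = subprocess.run(
        ["openssl", "passwd", "-6", "-stdin"],
        input=password, capture_output=True, text=True, check=True,
    ).stdout.strip()

    # -m creates the home directory. Unlike adduser, useradd applies none of the
    # /etc/adduser.conf policy, so set groups and permissions explicitly if needed.
    subprocess.run(["useradd", "-m", "-p", hashed, username], check=True)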

asked 12 February 2014 at 00:16
2 answers

Your question is really broad, so I'll just mention a few aspects:

  • local socket I/O is faster than TCP, but for most applications the difference should be minuscule compared to all the other parts of your turnaround time (load balancer, PHP processing, DB query processing, ...)

  • split systems allow for better caching, e.g. the DB server can keep more indices in RAM.

  • possibly a scalability benefit: split systems are easier to reconfigure; e.g. to deploy a new software version or update PHP, you can just add a new application server, test it, and finally remove the old one.

  • to investigate your problem: check how many DB connections are opened for every web request. One explanation for your measurements would be an application that runs many SQL queries without persistent connections, so a new TCP connection is opened for every DB access (a rough sketch of this check follows the list).
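
A rough way to measure this, sketched under assumptions not in the question (a PostgreSQL database, the psycopg2 driver, a made-up DSN): time N trivial queries once with a fresh connection per query and once over a single persistent connection. Leaving host out of the DSN would make psycopg2 use the local Unix socket instead of TCP, which also illustrates the first bullet.

    import time
    import psycopg2  # assumption: PostgreSQL + psycopg2; any DB-API driver is analogous

    DSN = "host=db.example.internal dbname=app user=app password=secret"  # hypothetical
    N = 1000

    # Anti-pattern: a fresh TCP connection (and handshake) for every query.
    start = time.perf_counter()
    for _ in range(N):
        conn = psycopg2.connect(DSN)
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            cur.fetchone()
        conn.close()
    print(f"new connection per query: {time.perf_counter() - start:.2f} s")

    # One persistent connection, reused for all queries.
    start = time.perf_counter()
    conn = psycopg2.connect(DSN)
    for _ in range(N):
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            cur.fetchone()
    conn.close()
    print(f"one persistent connection: {time.perf_counter() - start:.2f} s")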

answered 3 December 2019 at 18:47

I don't know what you're doing to get 8-10 second load times (assuming you define "load time" as "the time between when the HTTP request arrives and when the page is built and sent to the browser").

You shouldn't be able to drive your CPUs to 100% usage with just a web server and a database, and even if you somehow manage to, putting the web server and the database on one machine wouldn't help.

Also, any kind of overload on the DB server wouldn't be alleviated by moving both servers onto the same hardware.

So the problem almost has to have something to do with

  • very many very small SQL statements that are sent to the DB individually, so even the small latency of a local network accumulates (imagine 10,000 SQL statements per page and a round-trip latency of 1 msec: that alone accounts for your 10 second load time; the arithmetic is sketched after this list)
  • huge blobs stored in the database that have to reach the web server over the SQL connection, which is typically slower than a protocol designed for file transfer, especially across a network
  • a network connection between your hosts that is artificially limited in some way
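
For the first of those, the latency arithmetic in a few lines (the query count and latency are the hypothetical round numbers from the bullet, not measurements):

    # Each individually-sent SQL statement costs at least one network round trip.
    queries_per_page = 10_000  # hypothetical: a very chatty application
    round_trip_ms = 1.0        # hypothetical: local-network round-trip latency

    latency_seconds = queries_per_page * round_trip_ms / 1000
    print(f"{latency_seconds:.0f} s per page spent on round trips alone")  # -> 10 s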

Maybe it's something else that I can't think of at the moment, because it's very uncommon for a typical web application to become slower when you distribute it over more CPUs, as long as those CPUs have a fast network connection between them.

As long as you haven't found out what causes/caused the trouble on separate hosts, you might run into the same problem again - or you might not.

I had the first kind of issue last year: a customer of mine contracted a third party to develop some software for them. A typical operation on the demo laptop took about 4 hours to complete. When my customer moved the software to the intended production environment (BIG application server, BIG database cluster), the same operation took a little over 16 hours. Sifting through the logs and doing a network trace, we found that the application did about 15,000 selects per second on the dev system, and the 0.3 ms latency between application server and database limited this to slightly over 3,000 selects per second. The devs were told to change the way they accessed the database (do a join on the two tables instead of a select on one table followed by a single-row select on each of the results), which got the whole operation down to less than 30 minutes.
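
The access pattern described there is the classic "N+1 select" problem. A minimal before/after sketch, using psycopg2 and a hypothetical orders/customers schema (neither the table names nor the driver come from the original story):

    import psycopg2  # hypothetical driver choice; any DB-API driver works the same way

    conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN
    cur = conn.cursor()

    # N+1 anti-pattern: one select, then one extra network round trip per row.
    cur.execute("SELECT id, customer_id FROM orders")
    for order_id, customer_id in cur.fetchall():
        cur.execute("SELECT name FROM customers WHERE id = %s", (customer_id,))
        name = cur.fetchone()[0]

    # Fix: one join, one round trip, same data.
    cur.execute("""
        SELECT o.id, c.name
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
    """)
    rows = cur.fetchall()

    conn.close()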

The point is, the kind of trouble you're having is unusual; it might have to do with your software behaving in an unusual way, and you really should investigate what's going on and why the 2-machine setup was so much slower.

Splitting into 2 machines should normally improve performance, because you have more CPUs to do the job. It also improves maintainability: your database might need a special kernel parameter or patch level to work well, while your web server might have conflicting requirements. And whenever you do an upgrade, it's much easier to upgrade one of the two systems without touching the other.

answered 3 December 2019 at 18:47
