I happened to check the production box's Apache configuration and noticed that there are two MPM configuration blocks, one for mpm_worker_module and one for mpm_event_module. I would like to have only one reconciled configuration.
<IfModule mpm_worker_module>
    ThreadLimit 150
    StartServers 2
    MaxClients 400
    MinSpareThreads 25
    MaxSpareThreads 75
    ThreadsPerChild 150
    MaxRequestsPerChild 0
</IfModule>
<IfModule mpm_event_module>
    StartServers 2
    MaxClients 150
    MinSpareThreads 25
    MaxSpareThreads 75
    ThreadLimit 64
    ThreadsPerChild 25
    MaxRequestsPerChild 0
</IfModule>
These two configurations are different. Apache doesn't give any error if I run 'apache2ctl configtest', so it looks like it is valid to have both blocks. But I am pretty sure that only one of them is actually taken into consideration.
I want to know which configuration Apache is currently using. I tried running 'apache2ctl status' to get the status of Apache, and I got the following error:
'www-browser -dump http://localhost:80/server-status' failed.
Maybe you need to install a package providing www-browser or you
need to adjust the APACHE_LYNX variable in /etc/apache2/envvars
Forbidden
You don't have permission to access /server-status on this server.
I would like to know: is there any command to get the current MPM configuration? Is there any command which can tell me the MaxClients value?
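For what it's worth, the following standard apache2ctl invocations should at least reveal which MPM is loaded, and therefore which <IfModule> block is actually in effect (a sketch, assuming a stock Debian/Ubuntu Apache install):

# Show which MPM the server is using
apache2ctl -V | grep -i mpm

# List all loaded modules; the active MPM appears among them
apache2ctl -t -D DUMP_MODULES | grep -i mpm

Only the <IfModule> block matching the active MPM applies; the other block is ignored.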
Related
Trying to find a decent .com domain name, I downloaded the complete list of .com domains from Verisign with the aim of running some SQL queries against it. One key goal is to run a query that checks a dictionary-sized list of English words to see if any don't have a .com domain. I'm not using an online service, partly because I haven't found one that gives me this sort of fine-tuned query control, but also because I'm curious how to do this myself.
My first step was to import Verisign's com.zone file (a text file) into a local Developer edition of SQL Server using the built-in Import Flat File wizard. It created a column I named RawData (datatype nvarchar(450), no nulls) in a table I named Com. It has ~352 million records. The records need some cleanup (e.g. I don't need the nameserver details, and some records don't seem to be parsed the same as others), but the domain names themselves seem to have been imported successfully.
I also created another table (~372K records, nvarchar(450), no nulls) named Words with a column named Word that's a listing of most English words (e.g. the, internet, was, made, for, cat, videos, etc.; no definitions, just one word per record).
An immediate hurdle I've run into, though, is performance. Even a basic query to check the availability of a single domain name is slow. When I run
SELECT *
FROM Com
WHERE RawData LIKE '%insert-some-domain-name-here%'
the execution time is approximately 4 minutes (on a laptop with an i9-9880H, 32 GB RAM, and a 2 TB NVMe SSD).
Seeing as I'd prefer not to die of old age before any theoretical dictionary-sized query finished, I'd welcome any suggestions on how to write the query and/or alter the database to reach the end goal: a reasonably fast search that generates a list of English words that don't have .com domain names.
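For comparison, my understanding is that a leading-wildcard LIKE can never use an index, whereas an exact-match predicate can. Presumably a query of this shape is what an index on RawData could answer with a seek rather than a full scan:

-- Assuming an index on RawData exists, an exact match can seek instead of scan.
SELECT *
FROM Com
WHERE RawData = 'insert-some-domain-name-here';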
Trying to find a decent .com domain name so I downloaded complete list
of .com domains from Verisign with the aim of running some SQL queries
against it. One key goal is to run a query that checks dictionary
sized list of English words to see if any don't have a .com domain.
Forget about the idea. The pool of available domain names was mined to death a long time ago; you are not the first to try this. The names that are still unregistered are obscure dictionary words that are not really usable and have no commercial value.
There are interesting things you can do, but finding good names 'overlooked' by others is not among them, as you will see for yourself.
Regarding the parsing of the zone file: here is what today's .com zone file looks like:
; The use of the Data contained in Verisign Inc.'s aggregated
; .com, and .net top-level domain zone files (including the checksum
; files) is subject to the restrictions described in the access Agreement
; with Verisign Inc.
$ORIGIN COM.
$TTL 900
@ IN SOA a.gtld-servers.net. nstld.verisign-grs.com. (
1587225735 ;serial
1800 ;refresh every 30 min
900 ;retry every 15 min
604800 ;expire after a week
86400 ;minimum of a day
)
$TTL 172800
NS A.GTLD-SERVERS.NET.
NS G.GTLD-SERVERS.NET.
NS H.GTLD-SERVERS.NET.
NS C.GTLD-SERVERS.NET.
NS I.GTLD-SERVERS.NET.
NS B.GTLD-SERVERS.NET.
NS D.GTLD-SERVERS.NET.
NS L.GTLD-SERVERS.NET.
NS F.GTLD-SERVERS.NET.
NS J.GTLD-SERVERS.NET.
NS K.GTLD-SERVERS.NET.
NS E.GTLD-SERVERS.NET.
NS M.GTLD-SERVERS.NET.
COM. 86400 DNSKEY 257 3 8 AQPDzldNmMvZFX4NcNJ0uEnKDg7tmv/F3MyQR0lpBmVcNcsIszxNFxsBfKNW9JYCYqpik8366LE7VbIcNRzfp2h9OO8HRl+H+E08zauK8k7evWEmu/6od+2boggPoiEfGNyvNPaSI7FOIroDsnw/taggzHRX1Z7SOiOiPWPNIwSUyWOZ79VmcQ1GLkC6NlYvG3HwYmynQv6oFwGv/KELSw7ZSdrbTQ0HXvZbqMUI7BaMskmvgm1G7oKZ1YiF7O9ioVNc0+7ASbqmZN7Z98EGU/Qh2K/BgUe8Hs0XVcdPKrtyYnoQHd2ynKPcMMlTEih2/2HDHjRPJ2aywIpKNnv4oPo/
COM. 86400 DNSKEY 256 3 8 AwEAAbbFc1fjkBCSycht7ah9eeRaltnLDK2sVyoxkjC6zBzm/5SGgfDG/H6XEupT7ctgCvnqexainTIfa8nnBYCOtAec7Gd1vb6E/3SXkgiDaMUJXmdt8E7obtVZqjFlN2QNnTljfMiECn16rZXlvXIi255T1wFkWtp5+LUCiufsLTeKc9xbQw7y0ucsR+GKz4yEStbYi98fnB5nOzzWhRUclf0=
COM. 86400 DNSKEY 256 3 8 AwEAAcpiOic4s641IPlBcMlBWA0FFomUWuKDWN5CzId/la4aA69RFpakRxPSZM8fegOQ+nYDrUY6UZkQRsowPr18b+MqyvHBUaT6CJUBkdRwlVcD/ikpcjvfGEiH5ttpDdZdS/YKZLBedh/uMCDLNS0baJ+nfkmMZGkYGgnK9K8peU9unWbwAOrJlrK60flM84EUolIIYD6s9g/FfyVB0tE86fE=
COM. 86400 NSEC3PARAM 1 0 0 -
COM. 900 RRSIG SOA 8 1 900 20200425160215 20200418145215 39844 COM. ItE0mu9Hb2meliHlot2/6f0cMvCJThPps/BxbyRkDDYesfLBVXqtIRHiDN+wlf7HS+lxFtLHIUzT0GAPf2y5cA0s3pUdBxyRft0fC76GEJq7g0Tcpifdxft4T/6XTv77rP8pFE7aSp+SMDtUMRFIGnGTGBo7WRhjIx1G0peGXMr13xRg4Pa9kigGtjSRi3SWyNT9x1IjVVVJtsFzP9sELQ==
COM. RRSIG NS 8 1 172800 20200423045013 20200416034013 39844 COM. D9BeOQ8drx8LiXXBOk6KlxKacpno/tPujwOAPd482Kj+yAQkFxVVL1bqU03WA7c12W/mkLxk665OQDfOOoirqMePDuamvQCguaSFVKVm5no42JsxoitzwOo+g0kwm9u2F/xGO9ybPfcEQ/nrH9de/RluVSVc0MPsCMja2sCuohEMSYApMjFs4XcXsED0lFTzllIESW7JvK8xb8RFId0TOw==
COM. 86400 RRSIG NSEC3PARAM 8 1 86400 20200423045013 20200416034013 39844 COM. l37dFS1JFXDg9gr1oGACX2rI/iegsIX3RlAEpshuIsT4isZ0FAw0pkAVJQvyqd5a3IOO1TjczWN1U/eYB2ynvX1MKLg3NEp01zUo67eJgowjV+g3zF3XtifhoW///Tqqz0GuAk443jol/Ue00SX3k6XgzxbycX9GKR9FmsLaIIowvz991eJyL1mgOpzQvLnIL1/EAZi9felFilkrj5JaIA==
COM. 86400 RRSIG DNSKEY 8 1 86400 20200430182421 20200415181921 30909 COM. Zkw5YJzP75sdfLN8kN/y8/ywFX+DvotF6fVdxKQGdmgJyyUnDliP6q0VXqVpHHDwtW2WfOlwskiW6007+MIGqV5VtTL6tGeZLv4hJDYZkAwIrl/xBN+aQmIvan4UdBROkOAnfi5Atf1adX5iCqn1jfIGMXb8kKVMrDuDJc/V6XbXEL8NysnqRQtdC6bVuunDQOg/Edw1Uy+B9ly9njU/EmlISkNZo2jo+cXJBFS+Is/6Xcn04+jkHiSRuAFwaGPxKPLeG92v+5ea7pWXBpSIwiqD7Gp/yJCvCUrRZP8eJcoYGav0TT7Bsp2ml15dV20FmNnBPdTqKZHtT8HAIp60Qw==
KITCHENEROKTOBERFEST NS NS1.UNIREGISTRYMARKET.LINK.
KITCHENEROKTOBERFEST NS NS2.UNIREGISTRYMARKET.LINK.
KITCHENFLOORTILE NS NS1.UNIREGISTRYMARKET.LINK.
KITCHENFLOORTILE NS NS2.UNIREGISTRYMARKET.LINK.
KITCHENTABLESET NS NS1.UNIREGISTRYMARKET.LINK.
KITCHENTABLESET NS NS2.UNIREGISTRYMARKET.LINK.
KITEPICTURES NS NS1.UNIREGISTRYMARKET.LINK.
KITEPICTURES NS NS2.UNIREGISTRYMARKET.LINK.
BOYSBOXERS NS NS1.UNIREGISTRYMARKET.LINK.
BOYSBOXERS NS NS2.UNIREGISTRYMARKET.LINK.
What you are interested in are the lines that contain NS in the second field. In this example, KITCHENEROKTOBERFEST.COM has two name servers declared. Since this is the .com zone file, .com has to be assumed when a name does not end with a dot. So you have to filter on those lines and remove duplicates. You should end up with about 140 million .com domain names, definitely not 352 million: either you have duplicates or you have imported the wrong data.
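A rough filtering sketch, assuming whitespace-delimited fields as shown above and standard Unix tools (com.zone being the downloaded zone file):

# Keep lines whose second field is NS, lowercase the name, remove duplicates.
awk '$2 == "NS" { print tolower($1) }' com.zone | sort -u > com-names.txt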
That gives you an idea of how crowded the .com zone is. Don't be surprised that any name that remotely makes sense is already registered.
When you have loaded the data into a table, the remaining problem is indexing it to achieve good query performance. I understand this issue has now been addressed, so I won't be making further suggestions on that.
One thing you can do to highlight pure dictionary domains is a JOIN of your two tables: the domain table and the keyword table. This needs indexes on both sides to run optimally, and I don't have details about your table structure. Alternatively, try each keyword one by one against the domain table in a loop (you can use a cursor).
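A minimal sketch of that JOIN, assuming the table and column names from the question (Com.RawData, Words.Word) and that RawData has been cleaned down to the bare name without the .COM suffix:

-- Index both sides so the anti-join can seek rather than scan.
CREATE INDEX IX_Com_RawData ON Com (RawData);
CREATE INDEX IX_Words_Word ON Words (Word);

-- English words with no matching .com registration.
SELECT w.Word
FROM Words AS w
WHERE NOT EXISTS (SELECT 1 FROM Com AS c WHERE c.RawData = w.Word);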
I too feel like this is a whole new question.
I am diagnosing an API on the server, and I am using the IIS log. What it offers me is fine, but I need two more variables: memory and processor usage. I have researched this and found nothing that helps me add those additional fields to the IIS log. Currently it shows me the following, plus one field that I added:
date time s-sitename s-computername s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs-version cs(User-Agent) cs(Referer) cs-host sc-status sc-substatus sc-win32-status sc-bytes cs-bytes time-taken x-perfomance
2020-02-13 19:54:14 W3SVC1 usuarioFake ::1 GET /api/fake/prueba api-version=1 0 - ::1 HTTP/1.1 PostmanRuntime/7.22.0 - localhost 200 0 0 1638 330 4422 x-perfomance
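For what it's worth, my understanding is that IIS 8.5+ enhanced logging custom fields can only pull from server variables or request/response headers, not process metrics such as CPU or memory usage. A hypothetical custom-field definition in applicationHost.config (the field name mirrors the x-perfomance field in the log above):

<!-- sourceType can be ServerVariable, RequestHeader or ResponseHeader -->
<logFile>
  <customFields>
    <add logFieldName="x-perfomance" sourceName="X-Perfomance" sourceType="ResponseHeader" />
  </customFields>
</logFile>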
I am using older versions of both Nutch (1.4) and Solr (3.4.0), as I had installation issues with later versions. After installing, I ran a crawl and now I wish to dump the crawled URLs into a text file. These are the available options in Nutch 1.4:
Abhijeet@Abhijeet /home/apache-nutch-1.4-bin/runtime/local/bin
$ ./nutch
Usage: nutch [-core] COMMAND
where COMMAND is one of:
crawl one-step crawler for intranets
readdb read / dump crawl db
mergedb merge crawldb-s, with optional filtering
readlinkdb read / dump link db
inject inject new urls into the database
generate generate new segments to fetch from crawl db
freegen generate new segments to fetch from text files
fetch fetch a segment's pages
parse parse a segment's pages
readseg read / dump segment data
mergesegs merge several segments, with optional filtering and slicing
updatedb update crawl db from segments after fetching
invertlinks create a linkdb from parsed segments
mergelinkdb merge linkdb-s, with optional filtering
solrindex run the solr indexer on parsed segments and linkdb
solrdedup remove duplicates from solr
solrclean remove HTTP 301 and 404 documents from solr
parsechecker check the parser for a given url
indexchecker check the indexing filters for a given url
domainstats calculate domain statistics from crawldb
webgraph generate a web graph from existing segments
linkrank run a link analysis program on the generated web graph
scoreupdater updates the crawldb with linkrank scores
nodedumper dumps the web graph's node scores
plugin load a plugin and run one of its classes main()
junit runs the given JUnit test
or
CLASSNAME run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
Expert: -core option is for developers only. It avoids building the job jar,
instead it simply includes classes compiled with ant compile-core.
NOTE: this works only for jobs executed in 'local' mode
There are 2 options, readdb and readlinkdb. Which one of these two do I need to run? Also, the formats of the two commands are as follows:
For readdb
Abhijeet@Abhijeet /home/apache-nutch-1.4-bin/runtime/local/bin
$ ./nutch readdb
cygpath: can't convert empty path
Usage: CrawlDbReader <crawldb> (-stats | -dump <out_dir> | -topN <nnnn> <out_dir> [<min>] | -url <url>)
<crawldb> directory name where crawldb is located
-stats [-sort] print overall statistics to System.out
[-sort] list status sorted by host
-dump <out_dir> [-format normal|csv ] dump the whole db to a text file in <out_dir>
[-format csv] dump in Csv format
[-format normal] dump in standard format (default option)
-url <url> print information on <url> to System.out
-topN <nnnn> <out_dir> [<min>] dump top <nnnn> urls sorted by score to <out_dir>
[<min>] skip records with scores below this value.
This can significantly improve performance.
For readlinkdb
Abhijeet@Abhijeet /home/apache-nutch-1.4-bin/runtime/local/bin
$ ./nutch readlinkdb
cygpath: can't convert empty path
Usage: LinkDbReader <linkdb> (-dump <out_dir> | -url <url>)
-dump <out_dir> dump whole link db to a text file in <out_dir>
-url <url> print information about <url> to System.out
I am confused as to how to use these two commands correctly. An example would be of great help.
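From the usage strings above, I assume the concrete invocations would look something like this (the paths are from my own setup):

./nutch readdb myCrawl2/crawldb -dump myCrawl2/CrawlDump
./nutch readlinkdb myCrawl2/linkdb -dump myCrawl2/LinkDump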
Edit:
So I have successfully run the readdb option and obtained the following result:
http://www.espncricinfo.com/ Version: 7
Status: 2 (db_fetched)
Fetch time: Sat Apr 15 20:40:38 IST 2017
Modified time: Thu Jan 01 05:30:00 IST 1970
Retries since fetch: 0
Retry interval: 2592000 seconds (30 days)
Score: 1.0042857
Signature: b7324a43f084e5b291ec56ccfb552a2a
Metadata: _pst_: success(1), lastModified=0
http://www.espncricinfo.com/afghanistan-v-ireland-2016-17/content/series/1040469.html Version: 7
Status: 2 (db_fetched)
Fetch time: Sat Apr 15 20:43:03 IST 2017
Modified time: Thu Jan 01 05:30:00 IST 1970
Retries since fetch: 0
Retry interval: 2592000 seconds (30 days)
Score: 0.080714285
Signature: f3bf66dc7c6cd440ee01819b29149140
Metadata: _pst_: success(1), lastModified=0
http://www.espncricinfo.com/afghanistan-v-ireland-2016-17/engine/match/1040485.html Version: 7
Status: 1 (db_unfetched)
Fetch time: Thu Mar 16 20:43:51 IST 2017
Modified time: Thu Jan 01 05:30:00 IST 1970
Retries since fetch: 0
Retry interval: 2592000 seconds (30 days)
Score: 0.0014285715
Signature: null
Metadata:
But on the other hand, running the readlinkdb option dumps an empty file. Any ideas on what could be going wrong?
This is my readlinkdb command: ./nutch readlinkdb myCrawl2/linkdb -dump myCrawl2/LinkDump
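One thing worth checking (an assumption on my part, not something visible in the output above): as far as I know, the link db is only populated once invertlinks has been run over the parsed segments, for example:

./nutch invertlinks myCrawl2/linkdb -dir myCrawl2/segments

If invertlinks was never run, a readlinkdb dump would come out empty.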
I have a problem getting reports in Piwik for a date range of about 30 days on the Dashboard, Visitors and Actions tabs. When I try to do this, this error occurs:
Oops… there was a problem during the request. Maybe the server had a temporary issue, or maybe you requested a report with too much data. Please try again. If this error occurs repeatedly please contact your Piwik administrator for assistance.
I ran archiving with the command below:
/usr/bin/php /var/www/html/piwik/console core:archive --url=http://myip/piwik/
That resolved the issue on the Dashboard only, but it still shows the error when I want reports for a date range of about 30 days on the Visitors and Actions tabs.
When I set the date range to a smaller range, for example about 15 days, it is OK and does not show any error.
I have installed Piwik on RHEL with PHP 5.3.3 and MySQL 5.1.
Can anyone help me fix this problem?
Thanks.
I have fixed this issue. It is related to the date format. After changing the date format, I was able to load my page without issue. I have added the error.log entry for reference.
Hope this will help someone who is seeking help.
You can change a particular table column's date format using a query of the following form:
-- Replace the placeholders with your table, column, and the bad/good date text.
UPDATE table_name
SET field_name = replace(same_field_name, 'unwanted_text', 'wanted_text');
[Sat Sep 12 12:03:37.124105 2015] [:error] [pid 21414] [client 192.168.1.22:50006] Error in Piwik: Date format must be: YYYY-MM-DD, or 'today' or 'yesterday' or any keyword supported by the strtotime function (see http://php.net/strtotime for more information): -62169984000, referer: http://192.168.1.20/AP_Enterprise/index.php?module=MultiSites&action=index&idSite=1&period=day&date=yesterday
I increased memory_limit from 128M to 512M and max_execution_time from 30 to 120 in my php.ini, and the issue was resolved.
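For reference, the resulting php.ini lines look like this (the php.ini location varies by distribution):

; php.ini
memory_limit = 512M
max_execution_time = 120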
This may be useful for someone with the same problem.
Thanks.
This error usually means that there was a timeout. Try adjusting your configuration; the reason for the timeouts is that date range reports are not pre-archived.
Also, it's recommended to use PHP 5.4+ and MySQL 5.5+ if performance is important to you.
I am running Apache2 on Linux (Ubuntu 9.10).
I am trying to monitor the load on my server using mod_status.
There are 2 things that puzzle me (see cut-and-paste below):
The CPU load is reported as a ridiculously small number,
whereas "uptime" reports a number between 0.05 and 0.15 at the same time.
The "requests/sec" is also ridiculously low (0.06)
when I know there are at least 10 requests coming in per second right now.
(You can see there are close to a quarter million "accesses" - this sounds right.)
I am wondering whether this is a bug (if so, is there a fix/workaround),
or maybe a configuration error (but I can't imagine how).
Any insights would be appreciated.
-- David Jones
- - - - -
Current Time: Friday, 07-Jan-2011 13:48:09 PST
Restart Time: Thursday, 25-Nov-2010 14:50:59 PST
Parent Server Generation: 0
Server uptime: 42 days 22 hours 57 minutes 10 seconds
Total accesses: 238015 - Total Traffic: 91.5 MB
CPU Usage: u2.15 s1.54 cu0 cs0 - 9.94e-5% CPU load
.0641 requests/sec - 25 B/second - 402 B/request
11 requests currently being processed, 2 idle workers
- - - - -
After I restarted my Apache server, I realized what is going on. The "requests/sec" figure is calculated over the lifetime of the server. So if your Apache server has been running for 3 months, it tells you nothing at all about the current load; instead, it reports the total number of requests divided by the total number of seconds of uptime.
It would be nice if there was a way to see the current load on your server. Any ideas?
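One rough approach is to sample the counter from mod_status's machine-readable output twice and take the difference (a sketch, assuming the ?auto endpoint is reachable and curl, awk and bc are installed):

# Sample "Total Accesses" twice, 10 seconds apart, and print requests/sec.
url='http://localhost/server-status?auto'
a=$(curl -s "$url" | awk -F': ' '/^Total Accesses/ {print $2}')
sleep 10
b=$(curl -s "$url" | awk -F': ' '/^Total Accesses/ {print $2}')
echo "scale=2; ($b - $a) / 10" | bc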
Anyway, ... answered my own question.
-- David Jones
The Apache status value "Total Accesses" is the total access count since the server started; its per-second delta is exactly what we mean by "requests per second".
Here is one way to get it:
1) Apache monitor script for zabbix
https://github.com/lorf/zapache/blob/master/zapache
2) Install and configure the Zabbix agent:
UserParameter=apache.status[*],/bin/bash /path/apache_status.sh $1 $2
3) Zabbix - Create apache template - Create Monitor item
Key: apache.status[{$APACHE_STATUS_URL}, TotalAccesses]
Type: Numeric(float)
Update interval: 20
Store value: Delta (speed per second) -- this is the key option
Zabbix will calculate the increment of the Apache request counter and store the delta value; that is your "requests per second".