I add a Plesk subscription with the following XML code, but it creates the subscription without any hosting type. I want the hosting type to be "website". Please help me.
<packet>
  <webspace>
    <add>
      <gen_setup>
        <name>ggg.com</name>
        <owner-login>mmm</owner-login>
        <ip_address>111.111.111.111</ip_address>
        <status>0</status>
      </gen_setup>
      <plan-name>1m</plan-name>
    </add>
  </webspace>
</packet>
The correct code is below; the <hosting><vrt_hst> section is what gives the subscription the virtual ("website") hosting type:
<packet>
  <webspace>
    <add>
      <gen_setup>
        <name>{domainName}</name>
        <owner-login>{username}</owner-login>
        <ip_address>111.111.111.111</ip_address>
      </gen_setup>
      <hosting>
        <vrt_hst>
          <property>
            <name>ftp_login</name>
            <value>ftp_{username}</value>
          </property>
          <property>
            <name>ftp_password</name>
            <value>{pass}</value>
          </property>
          <ip_address>111.111.111.111</ip_address>
        </vrt_hst>
      </hosting>
      <plan-name>{plan}</plan-name>
    </add>
  </webspace>
</packet>
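For reference, a minimal sketch of submitting such a packet to the Plesk XML API over HTTPS (the host, credentials, and file name are placeholders; agent.php and the HTTP_AUTH_* headers are the standard Plesk XML API conventions):

curl -k -X POST "https://plesk.example.com:8443/enterprise/control/agent.php" \
  -H "HTTP_AUTH_LOGIN: admin" \
  -H "HTTP_AUTH_PASSWD: secret" \
  -H "Content-Type: text/xml" \
  --data @add-webspace.xml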
I am using a WSO2 inbound endpoint to fetch a file from an FTP server, and I know how to get the file name back. Now my question is how to get the time the file was uploaded (or its last modified time).
This is the code to get the file name.
<property expression="get-property('transport', 'FILE_NAME')" name="ftp.var.filename"
    xmlns:ns="http://org.apache.synapse/xsd"/>
I think there should be similar code to get the timestamp of the file.
With the following property, you will be able to get the last modified time of the file polled from the inbound endpoint.
`<property expression="get-property('transport', 'LAST_MODIFIED')" name="ftp.var.last.modified.time" xmlns:ns="http://org.apache.synapse/xsd"/>`
Add this to the relevant sequence for further processing. The following is a sample sequence in which the file name and the last modified time are logged.
<?xml version="1.0" encoding="UTF-8"?>
<sequence name="fileSequence" onError="fault" xmlns="http://ws.apache.org/ns/synapse">
  <log level="custom">
    <property expression="get-property('transport', 'FILE_NAME')"
        name="ftp.var.filename" xmlns:ns="http://org.apache.synapse/xsd"/>
    <property expression="get-property('transport', 'LAST_MODIFIED')"
        name="ftp.var.last.modified.time" xmlns:ns="http://org.apache.synapse/xsd"/>
  </log>
</sequence>
Please check whether this meets your requirement, and refer to [1] for further clarification.
[1] https://github.com/wso2/wso2-synapse/blob/master/modules/transports/core/vfs/src/main/java/org/apache/synapse/transport/vfs/VFSTransportListener.java#L767
I have the same problem. I use just this one command for the whole process:
crawl urls/ucuzcumSeed.txt ucuzcum http://localhost:8983/solr/ucuzcum/ 10
crawl <seedDir> <crawlID> [<solrUrl>] <numberOfRounds>
By the way, I'm using version 2.3.1 of Nutch and version 5.2.1 of Solr. The problem is that I cannot fetch a whole web site with this single command; I suspect the numberOfRounds parameter doesn't work. On the first run, Nutch finds just one URL to fetch, generates, and parses it; only in the second round can it get more URLs. So Nutch effectively stops at the end of the first iteration, even though according to my command it should continue. What should I do to crawl a whole website with Nutch?
nutch-site.xml:
<property>
<name>http.agent.name</name>
<value>MerveCrawler</value>
</property>
<property>
<name>storage.data.store.class</name>
<value>org.apache.gora.hbase.store.HBaseStore</value>
<description>Default class for storing data</description>
</property>
<property>
<name>plugin.includes</name>
<value>protocol-httpclient|urlfilter-regex|index-(basic|more)|query-(basic|site|url|lang)|indexer-solr|nutch-extensionpoints|protocol-httpclient|urlfilter-rege$
</property>
<property>
<name>http.content.limit</name>
<value>-1</value><!-- No limit -->
<description>The length limit for downloaded content using the http://
protocol, in bytes. If this value is nonnegative (>=0), content longer
than it will be truncated; otherwise, no truncation at all. Do not
confuse this setting with the file.content.limit setting.
</description>
</property>
<property>
<name>fetcher.verbose</name>
<value>true</value>
<description>If true, fetcher will log more verbosely.</description>
</property>
<property>
<name>db.max.outlinks.per.page</name>
<value>100000000000000000000000000000000000000000000</value>
<description>The maximum number of outlinks that we'll process for a page.
If this value is nonnegative (>=0), at most db.max.outlinks.per.page outlinks
will be processed for a page; otherwise, all outlinks will be processed.
</description>
</property>
<property>
<name>db.ignore.external.links</name>
<value>false</value>
<description>If true, outlinks leading from a page to external hosts
will be ignored. This is an effective way to limit the crawl to include
only initially injected hosts, without creating complex URLFilters.
</description>
</property>
<property>
<name>db.ignore.internal.links</name>
<value>false</value>
<description>If true, when adding new links to a page, links from
the same host are ignored. This is an effective way to limit the
size of the link database, keeping only the highest quality
links.
</description>
</property>
<property>
<name>fetcher.server.delay</name>
<value>10</value>
<description>The number of seconds the fetcher will delay between
successive requests to the same server. Note that this might get
overriden by a Crawl-Delay from a robots.txt and is used ONLY if
fetcher.threads.per.queue is set to 1.
</description>
</property>
<property>
<name>file.content.limit</name>
<value>-1</value>
<description>The length limit for downloaded content using the file
protocol, in bytes. If this value is nonnegative (>=0), content longer
than it will be truncated; otherwise, no truncation at all. Do not
confuse this setting with the http.content.limit setting.
</description>
</property>
<property>
<name>http.timeout</name>
<value>100000000000000000000000000000000000</value>
<description>The default network timeout, in milliseconds.</description>
</property>
<property>
<name>generate.max.count</name>
<value>100000000</value>
<description>The maximum number of urls in a single
fetchlist. -1 if unlimited. The urls are counted according
to the value of the parameter generator.count.mode.
</description>
</property>
There are several reasons why the crawl might not get further, e.g. robots.txt directives. Look at the logs and/or the content of the crawl table to get a better idea of what the problem might be.
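For instance, a minimal sketch of inspecting the crawl table with the Nutch 2.x readdb tool (assuming the crawl id from your command; exact options may vary slightly between versions):

bin/nutch readdb -crawlId ucuzcum -stats

This prints per-status counts (fetched, unfetched, gone, etc.), which usually shows whether the generator simply ran out of eligible URLs or whether pages were rejected.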
I have some static files (some are HTML, some are images, and some are pure data files, like .csv or .xls) that I want to share through the ESB. I could make that happen by running a separate HTTP server that receives the requests for these through the ESB; instead, I'd like to handle it in the ESB itself. Based on the incoming request URL (say an HTTP GET request: http://myesb.com:8280/getstatus.html), I'd like to pull these static files from the local server's folders.
I tried the VFS method, and it looks like it has a built-in "refresh" mechanism that I don't want. I want to "GET" this data only when clients request it.
In short, I'd like a simple mapping like this:
http://myesb.com:8280/getstatus.html will fetch the contents of /var/myapp/status/appstatus.html file.
Update
I tried the following sequence - I don't know how to make it work :(
<sequence xmlns="http://ws.apache.org/ns/synapse" name="app1status">
  <in>
    <log level="custom">
      <property name="Reached app1status page - in" value="app1 Status"/>
      <property name="transport.vfs.ContentType" value="text/html"/>
      <property xmlns:ns="http://org.apache.synapse/xsd" name="TRPURL:" expression="get-property('From')"/>
    </log>
    <property name="transport.vfs.FileURI" value="vfs:file:///opt/platform/traffic/app1status1.html" scope="transport" type="STRING"/>
    <property name="HTTP_METHOD" value="GET"/>
    <property name="ClientApiNonBlocking" action="remove" scope="axis2"/>
    <header name="To" action="remove"/>
    <property name="RESPONSE" value="true" scope="default" type="STRING"/>
  </in>
  <out>
    <log level="custom">
      <property name="::::::Out:::::Reached app1status" value=" From OUT"/>
      <property name="messageType" value="text/html"/>
      <property name="ContentType" value="text/html"/>
    </log>
    <send/>
  </out>
</sequence>
Note the following in the <in> mediator:
<property name="transport.vfs.FileURI" value="vfs:file:///opt/platform/traffic/app1status1.html" scope="transport" type="STRING"/>
My intent is to have the content of the file app1status1.html retrieved and sent back as the response. But I am not able to get the contents retrieved and added to the "RESPONSE".
Let me know how it can be done.
Thanks for your time.
Define a REST API and, based on GET/PUT, pull data from or post data to your server.
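For example, a minimal sketch of such an API (the name and context are assumptions, not a tested implementation; reading the file itself would still need something like a script or custom class mediator):

<api xmlns="http://ws.apache.org/ns/synapse" name="AppStatusAPI" context="/getstatus.html">
  <resource methods="GET">
    <inSequence>
      <!-- build the response body here, e.g. a script or class mediator
           that reads /var/myapp/status/appstatus.html into the payload -->
      <property name="messageType" value="text/html" scope="axis2"/>
      <respond/>
    </inSequence>
  </resource>
</api>

The <respond/> mediator sends the current message straight back to the client, so no backend endpoint is needed.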
Can anyone tell me if I can associate a static index field for Liferay using the solr-web plugin? Is there a way to define a static index field in Solr?
I need something similar to the following configuration in Nutch
<property>
  <name>index.static</name>
  <value>source:nutch</value>
</property>
This adds the field "source", with the value "nutch", to every document indexed by Nutch. Is there anything similar for Liferay + Solr?
I'm not sure about the Liferay configuration; however, you can add a default value in schema.xml, which will be applied to all documents:
<field name="source" type="string" indexed="true" stored="true" default="Nutch" />
I have the following JPA code, with all the values checked (ticket contains a valid bean, the method ends without exception, etc.). It is executed, it does not throw any exceptions, yet in the end no data is written to the table.
I also tried retrieving a bean from the table; that also "works" (the table is empty, so no data is returned).
The setup is:
JBoss 6.1 Final
SQLServer 2008 Express (driver SQL JDBC 3 from MS)
The persistence code:
public String saveTicket() {
    System.out.println("Controller saveTicket()");
    // I know it would be better to share a single factory instance; this is just for testing
    EntityManagerFactory factory = Persistence.createEntityManagerFactory("GesMan");
    EntityManager entityMan = factory.createEntityManager();
    entityMan.persist(this.ticket);
    entityMan.close();
    return null; // navigation outcome omitted here
}
The persistence unit is:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
  <persistence-unit name="GesMan" transaction-type="JTA">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <jta-data-source>java:/GesManDS</jta-data-source>
    <class>es.caib.gesma.gesman.Ticket</class>
    <properties>
      <property name="hibernate.dialect" value="org.hibernate.dialect.SQLServerDialect"/>
      <property name="hibernate.transaction.manager_lookup_class"
          value="org.hibernate.transaction.JBossTransactionManagerLookup"/>
      <property name="hibernate.show_sql" value="true"/>
    </properties>
  </persistence-unit>
</persistence>
The datasource:
<datasources>
  <local-tx-datasource>
    <jndi-name>GesManDS</jndi-name>
    <connection-url>jdbc:sqlserver://spsigeswnt14.caib.es:1433;DatabaseName=TEST_GESMAN</connection-url>
    <driver-class>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver-class>
    <user-name>thisis</user-name>
    <password>notthepassword</password>
    <check-valid-connection-sql>SELECT * FROM dbo.Ticket</check-valid-connection-sql>
    <metadata>
      <type-mapping>MS SQLSERVER</type-mapping>
    </metadata>
  </local-tx-datasource>
</datasources>
Call entityMan.flush() or transaction.commit() before closing it; otherwise all queued changes are discarded on close.
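A minimal sketch of what that looks like with a RESOURCE_LOCAL-style transaction (note that the persistence unit above is JTA, so on JBoss you would need a JTA transaction instead, e.g. a UserTransaction or an EJB):

EntityManager entityMan = factory.createEntityManager();
EntityTransaction tx = entityMan.getTransaction(); // only available for RESOURCE_LOCAL units
tx.begin();
entityMan.persist(this.ticket);
tx.commit(); // flushes the pending INSERT to the database
entityMan.close();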
In the end it looks like I was using the wrong approach... In JBoss you can't (better said, I could not manage to) access JPA directly, as you would in Java SE.
I ended up creating an EJB (with transactions) and moving all the JPA logic there. A minimal sketch of that approach (class and method names are illustrative):
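@Stateless
public class TicketService {
    @PersistenceContext(unitName = "GesMan")
    private EntityManager entityMan;

    // Runs in a container-managed JTA transaction (REQUIRED by default),
    // so the persist is flushed and committed when the method returns.
    public void saveTicket(Ticket ticket) {
        entityMan.persist(ticket);
    }
}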
PS: Of course, if I am wrong please tell me (now it is more of an academic issue, but I still want to know).