I'm new to Play Framework and I'm trying to get it to work with RequireJS. When I run my app in dev mode everything works fine, but when I set application.mode=prod and start the server with play start, I run into problems.
The browser receives an HTTP404 when attempting to load /assets/javascripts-min/home/main.js.
Here's my Build.scala file:
import sbt._
import Keys._
import play.Project._
import com.google.javascript.jscomp._
import java.io.File

object MyBuild extends Build {

  val appDependencies = Seq(
    jdbc,
    anorm,
    cache
  )

  val appVersion = "0.0.1"
  val appName = "TodoList"

  // set Closure compiler options so it won't choke on modern JS frameworks
  val root = new java.io.File(".")
  val defaultOptions = new CompilerOptions()
  defaultOptions.closurePass = true
  defaultOptions.setProcessCommonJSModules(true)
  defaultOptions.setCommonJSModulePathPrefix(root.getCanonicalPath + "/app/assets/javascripts/")
  defaultOptions.setLanguageIn(CompilerOptions.LanguageMode.ECMASCRIPT5)
  CompilationLevel.WHITESPACE_ONLY.setOptionsForCompilationLevel(defaultOptions)

  val main = play.Project(appName, appVersion, appDependencies).settings(
    (Seq(requireJs += "home/main.js", requireJsShim := "home/main.js") ++ closureCompilerSettings(defaultOptions)): _*
  )
}
It turns out that I had conflicting build.sbt (in the root directory) and Build.scala (in the project directory) files. Once I removed the sbt file, the requireJs optimization began working as expected.
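For anyone hitting the same symptom, the conflict can be spotted mechanically. Here is a small sketch (in Python, assuming the standard sbt file locations; the helper name is mine, not part of sbt) of that check:

```python
from pathlib import Path

def find_conflicting_builds(project_root):
    """Return True when both a root build.sbt and a project/Build.scala exist,
    a combination that can make sbt/Play silently ignore settings from one of
    them (such as the requireJs options above)."""
    root = Path(project_root)
    has_sbt = (root / "build.sbt").is_file()
    has_build_scala = (root / "project" / "Build.scala").is_file()
    return has_sbt and has_build_scala
```

If the check returns True, consolidate the settings into one build definition and delete the other file, as described above.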
Related
I have the following code to test the Flink and Hive integration. I submit the application via flink run -m yarn-cluster ..... The hiveConfDir is a local directory that resides on the machine from which I submit the application. How is Flink able to read this local directory when the main class is running in the cluster (yarn-cluster)? Thanks!
package org.example.app

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.bridge.scala._
import org.apache.flink.table.catalog.hive.HiveCatalog
import org.apache.flink.types.Row

object FlinkBatchHiveTableIntegrationTest {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(1)
    val tenv = StreamTableEnvironment.create(env)

    val name = "myHiveCatalog"
    val defaultDatabase = "default"

    // how is Flink able to read this local directory?
    val hiveConfDir = "/apache-hive-2.3.7-bin/conf"
    val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir)
    tenv.registerCatalog(name, hive)
    tenv.useCatalog(name)

    val sql =
      """
      select * from testdb.t1
      """.stripMargin(' ')

    val table = tenv.sqlQuery(sql)
    table.printSchema()
    table.toAppendStream[Row].print()

    env.execute("FlinkHiveIntegrationTest")
  }
}
It looks like I found the answer. The application is submitted with flink run -m yarn-cluster. This way, the main method of the application runs on the client side, where Hive is installed, so the Hive conf dir can be read.
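Since the main method runs on the submitting machine, the local conf dir can be validated before the catalog is even built. A minimal pre-submit check, sketched in Python (hive-site.xml is the standard Hive config file; the helper itself is hypothetical, not a Flink API):

```python
import os

def check_hive_conf_dir(hive_conf_dir):
    """Fail fast on the client if the Hive conf dir is not readable locally.
    With `flink run -m yarn-cluster`, main() executes on the submitting
    machine, so this path only needs to exist there, not on the YARN nodes."""
    hive_site = os.path.join(hive_conf_dir, "hive-site.xml")
    if not os.path.isfile(hive_site):
        raise FileNotFoundError("hive-site.xml not found in %s" % hive_conf_dir)
    return hive_site
```

This mirrors what HiveCatalog does internally: it reads hive-site.xml from the given directory on whichever machine runs the constructor.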
I want to connect an external Realm database to my Android project. Realm is already set up in build.gradle. I copied the test database file "realmdata.realm" into the "raw" folder in "res".
Running the project gives me the error:
Caused by: io.realm.exceptions.RealmFileException: Could not resolve the path to the asset file: realmdata.realm Kind: ACCESS_ERROR.
...
d.androidrealmtestapp.MainActivity.onCreate(MainActivity.kt:40)
...
which corresponds to code line:
realm = Realm.getInstance(c)
No matter whether I change the filename or its position in the "res" directory, the output is the same. Printing the RealmConfiguration shows "realmFileName : default.realm". Why "default.realm", since I gave the asset file name "realmdata.realm"? What am I doing wrong? So my question is: how do I properly connect an external Realm file to the project? I am a beginner in Kotlin and Realm.
import android.support.v7.app.AppCompatActivity
import android.os.Bundle
import android.support.v7.widget.LinearLayoutManager
import android.support.v7.widget.RecyclerView
import io.realm.Realm
import io.realm.RealmConfiguration
import io.realm.annotations.RealmModule

class MainActivity : AppCompatActivity() {
    private lateinit var mainRecycler: RecyclerView
    lateinit var text: String
    private lateinit var realm: Realm

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        println("--------------------------------------------- ")
        print(application.assets.toString())
        Realm.init(this)
        val c = RealmConfiguration.Builder()
            .assetFile("realmdata.realm")
            .modules(MyModule())
            .readOnly()
            .build()
        println("--------------------------------------------- ")
        println(" c configuration builder file:")
        println(c)
        println("--------------------------------------------- ")
        Realm.setDefaultConfiguration(c)
        realm = Realm.getInstance(c)
        realm.beginTransaction()
        print("realm ...")
        realm.commitTransaction()
        mainRecycler = findViewById(R.id.main_recycler)
        mainRecycler.layoutManager = LinearLayoutManager(this)
        mainRecycler.adapter = MainAdapter()
    }

    @RealmModule(classes = arrayOf(RealmModel::class))
    private class MyModule {}
}
"I copied test database file: "realmdata.realm" into "raw" folder in "res""
You need to copy your database to the assets folder instead.
To create an assets folder, follow this.
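assetFile() resolves names against the APK's assets folder, not res/raw, which is why the configuration falls back to default.realm. As a rough illustration of the required move (standard Gradle module layout assumed; the helper is only for illustration, not an Android API):

```python
import shutil
from pathlib import Path

def move_realm_to_assets(module_dir):
    """Move realmdata.realm from src/main/res/raw (where assetFile() never
    looks) into src/main/assets (where it does)."""
    src = Path(module_dir) / "src" / "main" / "res" / "raw" / "realmdata.realm"
    assets = Path(module_dir) / "src" / "main" / "assets"
    assets.mkdir(parents=True, exist_ok=True)  # create assets/ if missing
    dest = assets / "realmdata.realm"
    shutil.move(str(src), str(dest))
    return dest
```

After the file lives in src/main/assets, the assetFile("realmdata.realm") call in the question should resolve.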
I need to build a graph using the Gephi Toolkit in Java. I have my node and edge data in CSV format. All the toolkit tutorials show the syntax for importing GML or GEXF files. Since I have two CSV files, can anyone tell me the syntax for importing these CSVs in Java using the Gephi Toolkit jar?
It's better to try importing the CSV files with both the gephi-toolkit and the Gephi application, to see whether the files themselves have any problems. Below is the way I found to import CSV files with the gephi-toolkit, step by step.
Here I use NetBeans IDE 8.2 with gephi-toolkit-0.9.2-all.jar, referring to several demos from GitHub.
After creating a new Java project and a new Java class (my test class is named Transfer95), you also need to add the gephi-toolkit jar file to your library. According to the tutorial and the Javadoc of the gephi-toolkit, much has changed in the new toolkit; the same import methods can be used to import several different file types, such as GEXF and CSV.
The first several lines are the usual common code to init the project and get the controllers and container.
//Init a project - and therefore a workspace
ProjectController pc = Lookup.getDefault().lookup(ProjectController.class);
pc.newProject();
Workspace workspace = pc.getCurrentWorkspace();

//Get controllers and models
ImportController importController = Lookup.getDefault().lookup(ImportController.class);

//Get models and controllers for this new workspace - will be useful later
GraphModel graphModel = Lookup.getDefault().lookup(GraphController.class).getGraphModel();

//Import file
Container container;
try {
    File file_node = new File(getClass().getResource("/resourceason2/club_1_1995.csv").toURI());
    container = importController.importFile(file_node);
    container.getLoader().setEdgeDefault(EdgeDirectionDefault.DIRECTED); //Force DIRECTED
    container.getLoader().setAllowAutoNode(true); //create missing nodes
    container.getLoader().setEdgesMergeStrategy(EdgeMergeStrategy.SUM);
    container.getLoader().setAutoScale(true);
} catch (Exception ex) {
    ex.printStackTrace();
    return;
}
Here I just imported the node CSV file with the same importController method used for GEXF in the ImportExport and HeadlessSimple demos, and it worked. But we also need to import the edge CSV file into the network, and I finally found the answer in the GenerateGraph demo, which appends a new container to the current workspace. So the import code becomes:
//Init a project - and therefore a workspace
ProjectController pc = Lookup.getDefault().lookup(ProjectController.class);
pc.newProject();
Workspace workspace = pc.getCurrentWorkspace();

//Get controllers and models
ImportController importController = Lookup.getDefault().lookup(ImportController.class);

//Get models and controllers for this new workspace - will be useful later
GraphModel graphModel = Lookup.getDefault().lookup(GraphController.class).getGraphModel();

//Import file
Container container, container2;
try {
    File file_node = new File(getClass().getResource("/resourceason2/club_1_1995.csv").toURI());
    container = importController.importFile(file_node);
    container.getLoader().setEdgeDefault(EdgeDirectionDefault.DIRECTED); //Force DIRECTED
    container.getLoader().setAllowAutoNode(true); //create missing nodes
    container.getLoader().setEdgesMergeStrategy(EdgeMergeStrategy.SUM);
    container.getLoader().setAutoScale(true);

    File file_edge = new File(getClass().getResource("/resourceason2/transfer_1_1995.csv").toURI());
    container2 = importController.importFile(file_edge);
    container2.getLoader().setEdgeDefault(EdgeDirectionDefault.DIRECTED); //Force DIRECTED
    container2.getLoader().setAllowAutoNode(true); //create missing nodes
    container2.getLoader().setEdgesMergeStrategy(EdgeMergeStrategy.SUM);
    container2.getLoader().setAutoScale(true);
} catch (Exception ex) {
    ex.printStackTrace();
    return;
}
Here we need to append the new container to the same workspace:
//Append imported data to GraphAPI
importController.process(container, new DefaultProcessor(), workspace);
importController.process(container2, new AppendProcessor(), workspace); //Use AppendProcessor to append to current workspace
Finally, check that your network was imported correctly:
//See if graph is well imported
DirectedGraph graph = graphModel.getDirectedGraph();
System.out.println("Nodes: " + graph.getNodeCount());
System.out.println("Edges: " + graph.getEdgeCount());
That's why I advise you to test the CSV import with both Gephi and the gephi-toolkit. That's my rough code; hopefully it will help you, though there may be a better answer.
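Conceptually, the two-container import just loads a node list and an edge list and merges them into one graph, and the final getNodeCount()/getEdgeCount() check is a row count over the two CSVs. A plain-Python sketch of that sanity check (assuming a node file with an Id column and an edge file with Source/Target columns, the layout Gephi's CSV importer expects):

```python
import csv
import io

def count_graph(nodes_csv, edges_csv):
    """Count data rows in a node list and an edge list, mirroring the
    getNodeCount()/getEdgeCount() check in the Java code above."""
    nodes = list(csv.DictReader(io.StringIO(nodes_csv)))
    edges = list(csv.DictReader(io.StringIO(edges_csv)))
    return len(nodes), len(edges)
```

Running this on your two files before the toolkit import makes it easy to see whether a mismatch comes from the data or from the import code.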
References
Javadoc
gephi toolkit demo
gephi forum
I have a main Grails application that uses a private Grails plugin, mySearch.
The plugin has a file at mySearch/web-app/filters.json.
When the application is run with run-app, the resource file can be accessed with:
def ressource = new File(org.codehaus.groovy.grails.plugins.GrailsPluginUtils.getPluginDirForName('mySearch')?.file?.absolutePath
    + "/web-app/filters.json").text
But it doesn't work when the app is deployed in Tomcat 7.
I have tried using pluginManager.getGrailsPlugin('mySearch'),
but I can't access the absolute path of the resource.
After many attempts to resolve it, I found this workaround.
It looks messy, but I didn't find anything shorter and sweeter:
// Works only for a plugin installed in an app deployed from a war.
String getRessourceFile(String relativePath) throws FileNotFoundException {
    def pluginDir = pluginManager.getGrailsPlugin('MyPlugin')?.getPluginPath()
    def pluginFileSystemName = pluginManager.getGrailsPlugin('MyPlugin')?.getFileSystemName()
    def basedir = grailsApplication.parentContext.getResource("/")?.file
    if (basedir.toString().endsWith("web-app")) {
        basedir = basedir.parent
    }
    def ressourcefile = "$basedir$pluginDir/$relativePath"
    return new File(ressourcefile)?.text
}
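The core of the workaround is path arithmetic: strip a trailing web-app segment from the servlet base dir (the run-app layout), then append the plugin path. The same logic sketched in Python for clarity (names mirror the Groovy code; this is an illustration, not a Grails API):

```python
from pathlib import Path

def resource_path(basedir, plugin_path, relative_path):
    """Rebuild <basedir>/<pluginPath>/<relativePath>, treating a basedir
    that ends in 'web-app' (run-app layout) the same as its parent
    (war deployment layout)."""
    base = Path(basedir)
    if base.name == "web-app":
        base = base.parent  # normalize run-app layout to the war layout
    return base / plugin_path.lstrip("/") / relative_path
```

Both layouts then resolve to the same file, which is why the Groovy version works under Tomcat as well as run-app.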
I have written a Selenium application using WebDriver, and I want to run it on a remote server. When I log into the server via PuTTY (along with Xming), Selenium opens the browser on the server and renders the pages through the forwarded X display. In doing so, it takes far more time than if the browser could open on my localhost instead of the server. Is that possible, or is opening the browser on the server (which is painfully slow) the only option? Please also tell me if I am missing something.
Thanks in advance.
Try using Selenium Grid, instead of PuTTY, to run your Selenium application on a remote server. The Selenium website has an excellent Quick Start guide for Selenium Grid: http://code.google.com/p/selenium/wiki/Grid2.
You can run Selenium with a "headless" driver, HtmlUnitDriver, that does not actually open a browser:
http://code.google.com/p/selenium/wiki/HtmlUnitDriver
Note: HtmlUnitDriver accepts an argument so that it can emulate a specific browser.
@Lori
I implemented the code, but it still tries to open the browser through PuTTY, so it takes a lot of time to get the work done. The code is as follows:
import sys
import time

from scrapy.spider import BaseSpider
from scrapy.http import FormRequest
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.item import Item
from scrapy.http import Request
from selenium import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

class DmozSpider(BaseSpider):
    name = "linkedin_crawler"
    # defence news
    global company
    global query

    companyFilename = '<filename>'
    f = open(companyFilename, "r")
    f.seek(0)
    company = f.readline().strip()
    f.close()

    queryFilename = '/var/www/Symantec/recon/recon/' + company + '/Spider/LinkedIn/query.txt'
    f = open(queryFilename)
    f.seek(0)
    query = f.readline().strip()
    f.close()

    start_urls = ['https://www.linkedin.com/uas/login']

    def __init__(self):
        BaseSpider.__init__(self)
        capabilities = webdriver.DesiredCapabilities()
        self.selenium = webdriver.Remote(command_executor='http://localhost:5000/wd/hub', desired_capabilities=capabilities.FIREFOX)

    def __del__(self):
        self.selenium.quit()

    def parse(self, response):
        sel = self.selenium
        sel.get(response.url)
        global query
        elem1 = sel.find_element_by_name("session_key")
        elem2 = sel.find_element_by_name("session_password")
        elem1.send_keys("myemailid")
        elem2.send_keys("mypassword")
        elem2.send_keys(Keys.RETURN)
        return Request(query, callback=self.page_parse)

    def page_parse(self, response):
        global query
        global company
        sel = self.selenium
        sel.get(query)
        for i in xrange(10):
            # for i in xrange(5):
            nameFilename = ''
            nlist = sel.find_elements_by_xpath('//ol[@class="search-results"]/li/div/h3/a')
            fh = open(nameFilename, "a")
            for j in xrange(len(nlist)):
                url = nlist[j].get_attribute("href").encode('utf-8')
                name = nlist[j].text.encode('utf-8')
                fh.write(name)
                fh.write("<next>")
                fh.write(url)
                fh.write('\n')
            fh.close()
            next = sel.find_elements_by_xpath('//a[@class="page-link"]')
            next[0].click()
            time.sleep(5)
To run this script on the server, I use PuTTY to fire the command. But then it again uses Xming to open the browser, which makes the process slow again. So, how do I run the script without opening the browser on my local machine via Xming, so that this does not become the bottleneck? Thanks