Hello everyone, I'm developing an application that extracts data from a local database. When the application starts and I press the button to extract an article, it crashes. The interesting part of the error is a call inside MainActivity2:
public List<String> getAllLabels() {
    List<String> labels = new ArrayList<>();
    String selectQuery = "SELECT * FROM " + KEY_ARTICOLO;
    Cursor cursor = db.rawQuery(selectQuery, null);
    if (cursor.moveToFirst()) {
        do {
            labels.add(cursor.getString(1));
        } while (cursor.moveToNext());
    }
    return labels;
}
The error is:
Process: spa.a1926.federighi.blancmariclo, PID: 8203
java.lang.NullPointerException: Attempt to invoke virtual method 'android.database.Cursor android.database.sqlite.SQLiteDatabase.rawQuery(java.lang.String, java.lang.String[])' on a null object reference
at spa.a1926.federighi.blancmariclo.MainActivity2$GestioneDB.getAllLabels(MainActivity2.java:131)
at spa.a1926.federighi.blancmariclo.MainActivity2$1.onClick(MainActivity2.java:50)
at android.view.View.performClick(View.java:4756)
at android.view.View$PerformClick.run(View.java:19749)
at android.os.Handler.handleCallback(Handler.java:739)
at android.os.Handler.dispatchMessage(Handler.java:95)
at android.os.Looper.loop(Looper.java:135)
at android.app.ActivityThread.main(ActivityThread.java:5221)
at java.lang.reflect.Method.invoke(Native Method)
at java.lang.reflect.Method.invoke(Method.java:372)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:899)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:694)
10-30 10:53:22.815 1536-3316/system_process E/eglCodecCommon: glUtilsParamSize: unknow param 0x00008cdf
10-30 10:53:22.815 1536-3316/system_process E/eglCodecCommon: glUtilsParamSize: unknow param 0x00008824
The calling code in MainActivity2 is:
public void onClick(View view) {
    GestioneDB db = new GestioneDB(getApplicationContext());
    List<String> parolemenu = db.getAllLabels();
    ArrayAdapter<String> dataAdapter = new ArrayAdapter<>(MainActivity2.this, android.R.layout.simple_spinner_item, parolemenu);
    dataAdapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
    spinner.setAdapter(dataAdapter);
}
});
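From the stack trace, db is null when rawQuery() is called, so the SQLiteDatabase field inside GestioneDB is apparently never assigned before the query runs. A minimal sketch of the usual initialization, assuming GestioneDB wraps an SQLiteOpenHelper (the real constructor is not shown in the question, and the database name and version below are placeholders):

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

// Hypothetical sketch: obtain the SQLiteDatabase before calling rawQuery() on it.
public class GestioneDB extends SQLiteOpenHelper {
    private SQLiteDatabase db;

    public GestioneDB(Context context) {
        super(context, "articoli.db", null, 1);  // database name and version are assumptions
        db = getReadableDatabase();              // without this assignment, db stays null and rawQuery() throws the NPE
    }

    // onCreate()/onUpgrade() and getAllLabels() as in the question ...
}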
I have just started to use FluentMigrator for my current project. I wrote my first migration, but I have some trouble writing a unit test for it.
Here is some sample code:
private ServiceProvider CreateServiceProvider()
{
return new ServiceCollection()
.AddLogging(lb => lb.AddFluentMigratorConsole())
.AddFluentMigratorCore()
.ConfigureRunner(
builder => builder
.AddSQLite()
.WithGlobalConnectionString("Data Source=:memory:;Version=3;New=True;")
.WithMigrationsIn(typeof(MigrationOne).Assembly))
.BuildServiceProvider();
}
private void PerformMigrateUp(IServiceScope scope)
{
var runner = scope.ServiceProvider.GetRequiredService<IMigrationRunner>();
runner.MigrateUp(1);
}
[Test]
public void ShouldHaveTablesAfterMigrateUp()
{
var provider = this.CreateServiceProvider();
using (var scope = provider.CreateScope())
{
this.PerformMigrateUp(scope);
// here I'd like to test if tables have been created in the database by the migration
}
}
I don't know how (or if it is possible) to access the current database connection so I can perform a query. Any suggestions would be helpful. Thanks.
OK, I found a solution. I can use the runner's Processor to run my own SQL query.
It looks like this:
private ServiceProvider CreateServiceProvider()
{
return new ServiceCollection()
.AddLogging(lb => lb.AddFluentMigratorConsole())
.AddFluentMigratorCore()
.ConfigureRunner(
builder => builder
.AddSQLite()
.WithGlobalConnectionString(@"Data Source=:memory:;Version=3;New=True;")
.WithMigrationsIn(typeof(MigrationDate20181026113000Zero).Assembly))
.BuildServiceProvider();
}
[Test]
public void ShouldHaveNewVersionAfterMigrateUp()
{
var serviceProvider = this.CreateServiceProvider();
var scope = serviceProvider.CreateScope();
var runner = scope.ServiceProvider.GetRequiredService<IMigrationRunner>();
runner.MigrateUp(1);
string sqlStatement = "SELECT Description FROM VersionInfo";
DataSet dataSet = runner.Processor.Read(sqlStatement, string.Empty);
Assert.That(dataSet, Is.Not.Null);
Assert.That(dataSet.Tables[0].Rows[0].ItemArray[0], Is.EqualTo("Migration1"));
}
This is an old question but an important one. I find it strange that I couldn't find any documentation on this.
In any case, here is my solution, which I find to be a bit better because you don't need to rely on the runner. Since you don't need the runner, the options for constructor arguments open up hugely.
First, make sure you install Microsoft.Data.Sqlite, or you will get a strange error.
SQLite in-memory databases exist only as long as the connection does, and at first glance there is one database per connection. However, according to my experiments there is a way to share the database between connections, as long as at least one connection stays open at all times: you just need to name it.
https://learn.microsoft.com/en-us/dotnet/standard/data/sqlite/connection-strings#sharable-in-memory
So to begin with, I created a connection that will stay open until the test finishes. The database is named using Guid.NewGuid() so that subsequent connections work as expected.
var dbName = Guid.NewGuid().ToString();
var connectionString = $"Data Source={dbName};Mode=Memory;Cache=Shared";
var connection = new SqliteConnection(connectionString);
connection.Open();
After that, the crux of running the migrations is the same as in the previous answer, but the connection string uses the named database:
var sp = services.AddFluentMigratorCore()
.ConfigureRunner(fluentMigratorBuilder => fluentMigratorBuilder
.AddSQLite()
.WithGlobalConnectionString(connectionString)
.ScanIn(AssemblyWithMigrations).For.Migrations()
)
.BuildServiceProvider();
var runner = sp.GetRequiredService<IMigrationRunner>();
runner.MigrateUp();
Here is a class I use to inject a connection factory everywhere that needs to connect to the database for normal execution:
internal class PostgresConnectionFactory : IConnectionFactory
{
private readonly string connectionString;
public PostgresConnectionFactory(string connectionString)
{
this.connectionString = connectionString;
}
public DbConnection Create()
{
return new NpgsqlConnection(connectionString);
}
}
I just replaced this (all hail dependency inversion) with:
internal class InMemoryConnectionFactory : IConnectionFactory
{
private readonly string connectionstring;
public InMemoryConnectionFactory(string connectionstring)
{
this.connectionstring = connectionstring;
}
public DbConnection Create()
{
return new SqliteConnection(connectionstring);
}
}
where the connection string is the same named one I defined above.
Now you can simply use that connection factory anywhere that needs to connect to the same in-memory database, and since we can now connect multiple times, the possibilities for integration testing open up.
Here is the majority of my implementation:
public static IDisposable CreateInMemoryDatabase(Assembly AssemblyWithMigrations, IServiceCollection services = null)
{
if (services == null)
services = new ServiceCollection();
var connectionString = GetSharedConnectionString();
var connection = GetPersistantConnection(connectionString);
MigrateDb(services, connectionString, AssemblyWithMigrations);
services.AddSingleton<IConnectionFactory>(new InMemoryConnectionFactory(connectionString));
return services.BuildServiceProvider()
.GetRequiredService<IDisposableUnderlyingQueryingTool>();
}
private static string GetSharedConnectionString()
{
var dbName = Guid.NewGuid().ToString();
return $"Data Source={dbName};Mode=Memory;Cache=Shared";
}
private static void MigrateDb(IServiceCollection services, string connectionString, Assembly assemblyWithMigrations)
{
var sp = services.AddFluentMigratorCore()
.ConfigureRunner(fluentMigratorBuilder => fluentMigratorBuilder
.AddSQLite()
.WithGlobalConnectionString(connectionString)
.ScanIn(assemblyWithMigrations).For.Migrations()
)
.BuildServiceProvider();
var runner = sp.GetRequiredService<IMigrationRunner>();
runner.MigrateUp();
}
private static IDbConnection GetPersistantConnection(string connectionString)
{
var connection = new SqliteConnection(connectionString);
connection.Open();
return connection;
}
Then here is a sample test:
public class Test : IDisposable {
private readonly IDisposable _holdingConnection;
public Test() {
_holdingConnection = CreateInMemoryDatabase(typeof(MyFirstMigration).Assembly);
}
public void Dispose() {
_holdingConnection.Dispose();
}
}
You may notice that the static factory returns a custom interface. It's just an interface that extends the normal tooling I inject into repositories, but also implements IDisposable.
An untested bonus for integration testing, where you will have a service collection created via WebApplicationFactory, TestServer, etc.:
public void AddInMemoryPostgres(Assembly AssemblyWithMigrations)
{
var lifetime = services.BuildServiceProvider().GetService<IHostApplicationLifetime>();
var holdingConnection= InMemoryDatabaseFactory.CreateInMemoryDapperTools(AssemblyWithMigrations, services);
lifetime.ApplicationStopping.Register(() => {
holdingConnection.Dispose();
});
}
I'm executing queries periodically (via a scheduler) in my Spring Boot application.
application.properties
src_mssqlserver_url=jdbc:sqlserver://192.168.0.1;databaseName=Test;
src_mssqlserver_username=tester
src_mssqlserver_password=tester1
src_mssqlserver_driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver
Datasource and JdbcTemplate Bean
@Primary
@Bean(name = "src_mssqlserver")
@ConfigurationProperties(prefix = "spring.ds_mssqlserver")
public DataSource srcDataSource() {
    DriverManagerDataSource dataSource = new DriverManagerDataSource();
    dataSource.setDriverClassName(env.getProperty("src_mssqlserver_driverClassName"));
    dataSource.setUrl(env.getProperty("src_mssqlserver_url"));
    dataSource.setUsername(env.getProperty("src_mssqlserver_username"));
    dataSource.setPassword(env.getProperty("src_mssqlserver_password"));
    return dataSource;
}

@Bean(name = "srcJdbcTemplate")
public JdbcTemplate srcJdbcTemplate(@Qualifier("src_mssqlserver") DataSource dsSrcSqlServer) {
    return new JdbcTemplate(dsSrcSqlServer);
}
Usage: this method is called from a scheduler with a list of items to process (normally 1,000 records); the process runs once an hour.
@Autowired
@Qualifier("srcJdbcTemplate")
private JdbcTemplate srcJdbcTemplate;

public void batchInsertUsers(final List<User> users) {
    String queryInsert = "INSERT INTO [User] ([Name]"
            + " , [Created_Date]"
            + " , [Notes])"
            + " VALUES (?, SYSDATETIMEOFFSET(), ?)";
    srcJdbcTemplate.batchUpdate(queryInsert, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            User user = users.get(i);
            ps.setString(1, user.getName());
            ps.setString(2, user.getNotes());
        }

        @Override
        public int getBatchSize() {
            return users.size();
        }
    });
}
I'm getting warnings from the database administrator that my code keeps too many connections open. Please share a standard, workable way to handle this situation.
Thanks.
DriverManagerDataSource is NOT meant for production; it opens and closes a new connection each time one is needed.
Use a connection pool such as c3p0 instead.
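As a rough sketch of what that could look like with the question's configuration (c3p0's ComboPooledDataSource standing in for DriverManagerDataSource; the pool sizes below are illustrative assumptions, not recommendations):

import java.beans.PropertyVetoException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

@Primary
@Bean(name = "src_mssqlserver")
public DataSource srcDataSource() throws PropertyVetoException {
    // Pooled DataSource: connections are reused across the hourly batch runs
    // instead of being opened and closed for every statement.
    ComboPooledDataSource dataSource = new ComboPooledDataSource();
    dataSource.setDriverClass(env.getProperty("src_mssqlserver_driverClassName")); // may throw PropertyVetoException
    dataSource.setJdbcUrl(env.getProperty("src_mssqlserver_url"));
    dataSource.setUser(env.getProperty("src_mssqlserver_username"));
    dataSource.setPassword(env.getProperty("src_mssqlserver_password"));
    dataSource.setMinPoolSize(2);   // illustrative sizes; tune for your workload
    dataSource.setMaxPoolSize(10);
    return dataSource;
}

Alternatively, if the connection details are exposed under spring.datasource.*, Spring Boot's auto-configured pool (Tomcat JDBC or HikariCP, depending on the Boot version) can be used instead of building the DataSource by hand.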
I have a multi-tenant application configured with one DB per tenant plus one master DB. I load all the data sources in the application something like this:
@ConfigurationProperties(prefix = "spring.datasource")
@Bean
public DataSource dataSource() {
if(LOGGER.isInfoEnabled())
LOGGER.info("Loading datasources ...");
DataSource ds = null;
JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
// load MASTER datasource
ds = dataSourceLookup.getDataSource(properties.getJndiName());
// load other TENANTs DB details
JdbcTemplate jdbcTemplate = new JdbcTemplate(ds);
List<GroupConfig> groupConfigs = jdbcTemplate.query(
"select * from master.tblTenant where IsActive=1 and ConfigCode in ('DB_URL','DATASOURCE_CLASS','USER_NAME','DB_PASSWORD') order by 2",
new ResultSetExtractor<List<GroupConfig>>() {
public List<GroupConfig> extractData(ResultSet rs) throws SQLException, DataAccessException {
List<GroupConfig> list = new ArrayList<GroupConfig>();
while (rs.next()) {
GroupConfig groupConfig = new GroupConfig();
groupConfig.setGroupConfigId(rs.getLong(1));
groupConfig.setGroupCode(rs.getString(2));
groupConfig.setConfigCode(rs.getString(3));
groupConfig.setConfigValue(rs.getString(4));
groupConfig.setIsActive(rs.getBoolean(5));
list.add(groupConfig);
}
return list;
}
});
int propCount = 1;
Map<String, Map<String, String>> groups = new HashMap<String, Map<String, String>>();
Map<String, String> temp = new HashMap<String, String>();
for (GroupConfig config : groupConfigs) {
temp.put(config.getConfigCode(), config.getConfigValue());
if (propCount % 4 == 0) {
groups.put(config.getGroupCode(), temp);
temp = new HashMap<String, String>();
}
propCount++;
}
// Create TENANT dataSource
Map<Object, Object> resolvedDataSources = new HashMap<Object, Object>();
for (String tenantId : groups.keySet()) {
Map<String, String> groupKV = groups.get(tenantId);
DataSourceBuilder dataSourceBuilder = new DataSourceBuilder(this.getClass().getClassLoader());
dataSourceBuilder.driverClassName(groupKV.get("DATASOURCE_CLASS")).url(groupKV.get("DB_URL"))
.username(groupKV.get("USER_NAME")).password(groupKV.get("DB_PASSWORD"));
//System.out.println(dataSourceBuilder.findType()); //class org.apache.tomcat.jdbc.pool.DataSource
if (properties.getType() != null) {
dataSourceBuilder.type(properties.getType());
}
if(LOGGER.isInfoEnabled())
LOGGER.info("Building datasource : "+tenantId);
resolvedDataSources.put(tenantId, dataSourceBuilder.build());
}
resolvedDataSources.put("MASTER", ds);
MultitenantDataSource dataSource = new MultitenantDataSource();
dataSource.setTargetDataSources(resolvedDataSources);
dataSource.setDataSourceLookup(dataSourceLookup);
dataSource.afterPropertiesSet();
if(LOGGER.isInfoEnabled())
LOGGER.info("Datasources initialization finished !");
return dataSource;
}
In the controller I set the respective data source like this (similarly for TENANT data sources):
TenantContext.setCurrentTenant("MASTER");
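MultitenantDataSource and TenantContext are not shown in the question; as a hedged sketch, they presumably follow the usual Spring routing pattern built on AbstractRoutingDataSource and a ThreadLocal holder, roughly like this:

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Hypothetical sketch of the routing pieces the code above relies on (each class in its own file).
public class MultitenantDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        // The key ("MASTER" or a tenant id) picks one of the target data sources
        // registered via setTargetDataSources() above.
        return TenantContext.getCurrentTenant();
    }
}

public class TenantContext {
    private static final ThreadLocal<String> CURRENT_TENANT = new ThreadLocal<>();

    public static void setCurrentTenant(String tenant) {
        CURRENT_TENANT.set(tenant);
    }

    public static String getCurrentTenant() {
        return CURRENT_TENANT.get();
    }
}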
Issue: on server startup everything works fine (both MASTER DB and TENANT-specific queries), but once the server has been idle for some time (a few hours), tenant-specific calls start failing (while MASTER DB connections still work fine) with this error:
Could not open JPA EntityManager for transaction; nested exception is javax.persistence.PersistenceException: com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
Please help me get rid of this exception. Thanks in advance.
I found the problem and the solution as well.
Why were the tenant connections getting closed? Because Spring Boot's auto-configuration (@ConfigurationProperties(prefix = "spring.datasource")) was not being applied to the tenant DataSources that I was creating in code.
Resolution: I added a new method to set the Tomcat connection pool properties:
private DataSource buildDataSource(String driverClass, String url, String user, String pass){
PoolProperties p = new PoolProperties();
p.setUrl(url);
p.setDriverClassName(driverClass);
p.setUsername(user);
p.setPassword(pass);
p.setJmxEnabled(true);
p.setTestWhileIdle(false);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1");
p.setTestOnReturn(false);
p.setValidationInterval(30000);
p.setTimeBetweenEvictionRunsMillis(30000);
p.setMaxActive(100);
p.setInitialSize(10);
p.setMaxWait(10000);
p.setRemoveAbandonedTimeout(60);
p.setMinEvictableIdleTimeMillis(30000);
p.setMinIdle(10);
p.setLogAbandoned(true);
p.setRemoveAbandoned(true);
DataSource datasource = new DataSource();
datasource.setPoolProperties(p);
return datasource;
}
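As a usage sketch (assuming the tenant loop from the question is otherwise unchanged), the DataSourceBuilder call in the dataSource() bean would then be replaced with something like:

// Hypothetical wiring: build a pooled tenant DataSource instead of using DataSourceBuilder.
resolvedDataSources.put(tenantId, buildDataSource(
        groupKV.get("DATASOURCE_CLASS"),
        groupKV.get("DB_URL"),
        groupKV.get("USER_NAME"),
        groupKV.get("DB_PASSWORD")));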
This solved my problem, but I'm curious to know whether there is a way to apply auto-configuration when creating such objects in Spring Boot.
I am trying to make a standalone application using SQLite in Unity3D, and I am running into a strange problem.
I created a database using sqliteadmin, with a table named Admin that has the fields id, email, and password.
I am able to log in using email and password, but only in Unity edit mode.
It works fine there, but when I build the project and run it, it doesn't work, and I have no idea why.
Here is my code:
using UnityEngine;
using System.Collections;
using Mono.Data.Sqlite;
using System.Data;
using System;
using UnityEngine.UI;
public class DatabaseConnection : MonoBehaviour {
public Text em;
public Text pas;
public static int id;
public static string email ="";
public static string password="";
public static string wrong="Wrong Email/Password !!!";
public Text Wrong;
public GameObject loading;
private ButtonsController bc;
public GameObject loginPanel;
void Start () {
string conn = "URI=file:" + Application.dataPath + "/Database/TMDB.s3db";
IDbConnection dbconn;
dbconn = (IDbConnection)new SqliteConnection (conn);
dbconn.Open ();
IDbCommand dbcmd = dbconn.CreateCommand ();
string sqlQuery = "SELECT id, email, password " + "FROM Admin";
dbcmd.CommandText = sqlQuery;
IDataReader reader = dbcmd.ExecuteReader ();
while (reader.Read()) {
id = reader.GetInt32 (0);
email = reader.GetString(1);
password = reader.GetString(2);
}
reader.Close ();
reader = null;
dbcmd.Dispose ();
dbcmd = null;
dbconn.Close ();
dbconn = null;
loading.SetActive (false);
}
public void login()
{
if ((em.text == email) && (pas.text == password)) {
Debug.Log ("Success");
loading.SetActive (true);
loginPanel.SetActive(false);
Application.LoadLevel(1);
} else {
Debug.Log ("Error");
Wrong.text = wrong.ToString ();
}
}
}
Application.dataPath is read-only.
What you need is Application.persistentDataPath.
Check out this link:
http://answers.unity3d.com/questions/209108/when-to-use-persistentdatapath-versus-datapath.html
Create a StreamingAssets folder inside your Assets folder, and use this connection string:
string conn = "URI=file:" +
System.IO.Path.Combine(Application.streamingAssetsPath, "Database/TMDB.s3db");
Using streaming assets is necessary because they are placed in the normal file system on the target machine, which makes them accessible via a path name.
More info:
https://docs.unity3d.com/Manual/StreamingAssets.html
Just check the files: after building, the database is empty, so replace the database file in the build with the one you have been working on, keeping the same database name.
As the title says, I have the numerical Site ID and need a way to get the corresponding GlobalId.
I know there are tables with the mappings:
https://developer.ebay.com/DevZone/merchandising/docs/CallRef/Enums/GlobalIdList.html
https://developer.ebay.com/DevZone/merchandising/docs/Concepts/SiteIDToGlobalID.html
but I need a programmatic way to do it.
Your three options:
Hash Map implementation
import java.util.HashMap;

public class Mappings {
    public static void main(String[] args) {
        HashMap<Integer, String> myMap = new HashMap<>();
        myMap.put(0, "EBAY-US");
        myMap.put(2, "EBAY-ENCA");
        myMap.put(3, "EBAY-GB");
        myMap.put(15, "EBAY-AU");
        //...
        System.out.println("Given ID 15, its corresponding Global ID is: " + myMap.get(15));
    }
}
SQL Table implementation
//STEP 1. Import required packages
import java.sql.*;
public class JDBCExample {
// JDBC driver name and database URL
static final String JDBC_DRIVER = "com.mysql.jdbc.Driver";
static final String DB_URL = "jdbc:mysql://localhost/STUDENTS";
// Database credentials
static final String USER = "username";
static final String PASS = "password";
public static void main(String[] args) {
Connection conn = null;
Statement stmt = null;
try{
//STEP 2: Register JDBC driver
Class.forName("com.mysql.jdbc.Driver");
//STEP 3: Open a connection
System.out.println("Connecting to a selected database...");
conn = DriverManager.getConnection(DB_URL, USER, PASS);
System.out.println("Connected database successfully...");
//STEP 4: Execute a query
System.out.println("Inserting records into the table...");
stmt = conn.createStatement();
String sql = "INSERT INTO Mappings " +
"VALUES (0, 'EBAY-US')";
stmt.executeUpdate(sql);
sql = "INSERT INTO Mappings " +
"VALUES (2, 'EBAY-ENCA')";
stmt.executeUpdate(sql);
sql = "INSERT INTO Mappings " +
"VALUES (3, 'EBAY-GB')";
stmt.executeUpdate(sql);
sql = "INSERT INTO Mappings " +
"VALUES (15, 'EBAY-AU')";
stmt.executeUpdate(sql);
System.out.println("Inserted records into the table...");
sql = "SELECT siteID, globalID FROM Mappings";
ResultSet rs = stmt.executeQuery(sql);
//STEP 5: Extract data from result set
while(rs.next()){
//Retrieve by column name
int siteId = rs.getInt("siteID");
String globalId = rs.getString("globalID");
//Display values
System.out.print("siteId: " + siteId);
System.out.print(", globalId: " + globalId);
}
rs.close();
}catch(SQLException se){
//Handle errors for JDBC
se.printStackTrace();
}catch(Exception e){
//Handle errors for Class.forName
e.printStackTrace();
}finally{
//finally block used to close resources
try{
if(stmt!=null)
stmt.close();
}catch(SQLException se){
}// do nothing
try{
if(conn!=null)
conn.close();
}catch(SQLException se){
se.printStackTrace();
}//end finally try
}//end try
System.out.println("Goodbye!");
}//end main
}//end JDBCExample
Kimono implementation
If you don't want to keep local storage for the Site ID <=> Global ID mappings, you can use Kimono. As the TechCrunch article "Kimono Is A Smarter Web Scraper That Lets You “API-ify” The Web, No Code Required" puts it:
With Kimono, the end goal is to simplify data extraction so that anyone can manage it.
Then Kimono’s learning algorithm will build a data model involving the items you’ve
selected.
As Kimono's website goes on to mention:
Turn websites into structured APIs from your browser in seconds.