Error in SQL Server to WCF development

I am testing a database that has two tables (Satellite and Channel) that I need to expose using WCF. Unfortunately, I have tried everything I know, and everything I could find online, for more than a week now, and I can't solve the problem.
This is the service contract IService.cs
[ServiceContract]
public interface IService
{
    [OperationContract]
    List<Satalite> SelectSatalite(int satNum);

    [OperationContract]
    List<Satalite> SataliteList();

    [OperationContract]
    List<Channel> ChannelList(int satNum);

    [OperationContract]
    String Sat(int satNum);
}
And this is the Service.svc.cs file
public class Service : IService
{
    DataDbDataContext DbObj = new DataDbDataContext();

    public List<Satalite> SataliteList()
    {
        var satList = from r in DbObj.Satalites
                      select r;
        return satList.ToList();
    }

    public List<Satalite> SelectSatalite(int satNum)
    {
        var satList = from r in DbObj.Satalites
                      where r.SateliteID == satNum
                      select r;
        return satList.ToList();
    }

    public List<Channel> ChannelList(int satNum)
    {
        var channels = from r in DbObj.Channels
                       where r.SateliteID == satNum
                       select r;
        return channels.ToList();
    }

    public String Sat(int satNum)
    {
        Satalite satObj = DbObj.Satalites.Single(p => p.SateliteID == satNum);
        return satObj.Name;
    }
}
Whenever I run any of the first three operations from wcftestclient.exe I get the error below; the last one works with no issues.
The underlying connection was closed: The connection was closed unexpectedly.

Server stack trace:
   at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
   at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
   at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
   at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
   at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

Exception rethrown at [0]:
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
   at IService.SelectSatalite(Int32 satNum)
   at ServiceClient.SelectSatalite(Int32 satNum)

Inner Exception: The underlying connection was closed: The connection was closed unexpectedly.
   at System.Net.HttpWebRequest.GetResponse()
   at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
What I understand is that the error happens for the custom classes that map the DB tables; if I use a type the .NET compiler already knows (e.g. int or string), it works with no problems. Unfortunately, I haven't found a solution.

The error appears to have one of two causes:
a timeout, because you're returning too much data, i.e. selecting the data from the database takes too long for the service method to complete in time,
or:
the message size is too large, because you're selecting too much data, and the WCF communication aborts before all of it has been returned.
My solution:
don't select all the data from the tables! Return only as much data as you can realistically handle or display, e.g. 10, 20, or at most 100 rows.
Try this: if you change your method to
public List<Satalite> SataliteList(int count)
{
    var satList = (from r in DbObj.Satalites
                   select r).Take(count);
    return satList.ToList();
}
can you call this from the WCF Test Client with e.g. count = 10 or count = 50?

Adjusting the timeout settings on the server and client side will help you.
On the server side, adjust the sendTimeout attribute of the binding element; on the client side, adjust the receiveTimeout attribute of the binding element.
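Both knobs mentioned in these answers can also be raised explicitly. Below is a minimal client-side sketch, assuming a BasicHttpBinding endpoint at a placeholder address; the 10 MB quota and 5-minute timeouts are illustrative values, and the same settings map to the maxReceivedMessageSize, sendTimeout, and receiveTimeout attributes on the binding element in app.config:
public static IService CreateClient()
{
    var binding = new BasicHttpBinding
    {
        MaxReceivedMessageSize = 10 * 1024 * 1024, // default is 65,536 bytes
        SendTimeout = TimeSpan.FromMinutes(5),     // default is 1 minute
        ReceiveTimeout = TimeSpan.FromMinutes(5)
    };
    binding.ReaderQuotas.MaxArrayLength = 10 * 1024 * 1024;
    var factory = new ChannelFactory<IService>(
        binding, new EndpointAddress("http://localhost/Service.svc")); // placeholder address
    return factory.CreateChannel();
}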

Related

Flink JDBC Sink part 2

I posted a question a few days back: Flink JDBC sink.
Now I am trying to use the sink provided by Flink.
I have written the code and it runs, but nothing gets saved in the DB and no exceptions are thrown. With my previous sink the job never finished (which is expected, as it's a streaming app), but with the following code I get no error and nothing is saved to the DB.
public class CompetitorPipeline implements Pipeline {

    private final StreamExecutionEnvironment streamEnv;
    private final ParameterTool parameter;
    private static final Logger LOG = LoggerFactory.getLogger(CompetitorPipeline.class);

    public CompetitorPipeline(StreamExecutionEnvironment streamEnv, ParameterTool parameter) {
        this.streamEnv = streamEnv;
        this.parameter = parameter;
    }

    @Override
    public KeyedStream<CompetitorConfig, String> start(ParameterTool parameter) throws Exception {
        CompetitorConfigChanges competitorConfigChanges = new CompetitorConfigChanges();
        KeyedStream<CompetitorConfig, String> competitorChangesStream = competitorConfigChanges.run(streamEnv, parameter);

        // Add to JDBC sink
        competitorChangesStream.addSink(JdbcSink.sink(
                "insert into competitor_config_universe(marketplace_id, merchant_id, competitor_name, comp_gl_product_group_desc," +
                "category_code, competitor_type, namespace, qualifier, matching_type," +
                "zip_region, zip_code, competitor_state, version_time, compConfigTombstoned, last_updated) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)",
                (ps, t) -> {
                    ps.setInt(1, t.getMarketplaceId());
                    ps.setLong(2, t.getMerchantId());
                    ps.setString(3, t.getCompetitorName());
                    ps.setString(4, t.getCompGlProductGroupDesc());
                    ps.setString(5, t.getCategoryCode());
                    ps.setString(6, t.getCompetitorType());
                    ps.setString(7, t.getNamespace());
                    ps.setString(8, t.getQualifier());
                    ps.setString(9, t.getMatchingType());
                    ps.setString(10, t.getZipRegion());
                    ps.setString(11, t.getZipCode());
                    ps.setString(12, t.getCompetitorState());
                    ps.setTimestamp(13, Timestamp.valueOf(t.getVersionTime()));
                    ps.setBoolean(14, t.isCompConfigTombstoned());
                    ps.setTimestamp(15, new Timestamp(System.currentTimeMillis()));
                    System.out.println("sql" + ps);
                },
                new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                        .withUrl("jdbc:mysql://127.0.0.1:3306/database")
                        .withDriverName("com.mysql.cj.jdbc.Driver")
                        .withUsername("xyz")
                        .withPassword("xyz#")
                        .build()));
        return competitorChangesStream;
    }
}
You need to enable autocommit mode for the JDBC sink:
new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
        .withUrl("jdbc:mysql://127.0.0.1:3306/database?autocommit=true")
It looks like SimpleBatchStatementExecutor only works in auto-commit mode. If you need to commit and roll back batches, you have to write your own JdbcBatchStatementExecutor.
Have you tried including JdbcExecutionOptions? By default the sink buffers records and flushes only when the batch fills up, so on a low-volume stream rows can sit unwritten indefinitely; withBatchIntervalMs forces a flush at a regular interval:
dataStream.addSink(JdbcSink.sink(
        sql_statement,
        (statement, value) -> {
            /* Prepared Statement */
        },
        JdbcExecutionOptions.builder()
                .withBatchSize(5000)
                .withBatchIntervalMs(200)
                .withMaxRetries(2)
                .build(),
        new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                .withUrl("jdbc:mysql://127.0.0.1:3306/database")
                .withDriverName("com.mysql.cj.jdbc.Driver")
                .withUsername("xyz")
                .withPassword("xyz#")
                .build()));

Apache Flink JDBC InputFormat throwing java.net.SocketException: Socket closed

I am querying an Oracle database using the Flink DataSet API. For this I have customised Flink's JDBCInputFormat to return java.sql.ResultSet, as I need to perform further operations on the result set using Flink operators.
public static void main(String[] args) throws Exception {
    ExecutionEnvironment environment = ExecutionEnvironment.getExecutionEnvironment();
    environment.setParallelism(1);

    @SuppressWarnings("unchecked")
    DataSource<ResultSet> source
            = environment.createInput(JDBCInputFormat.buildJDBCInputFormat()
                    .setUsername("username")
                    .setPassword("password")
                    .setDrivername("driver_name")
                    .setDBUrl("jdbcUrl")
                    .setQuery("query")
                    .finish(),
                    new GenericTypeInfo<ResultSet>(ResultSet.class)
            );
    source.print();
    environment.execute();
}
The following is the customised JDBCInputFormat:
public class JDBCInputFormat extends RichInputFormat<ResultSet, InputSplit> implements ResultTypeQueryable {

    @Override
    public void open(InputSplit inputSplit) throws IOException {
        Class.forName(drivername);
        dbConn = DriverManager.getConnection(dbURL, username, password);
        statement = dbConn.prepareStatement(queryTemplate, resultSetType, resultSetConcurrency);
        resultSet = statement.executeQuery();
    }

    @Override
    public void close() throws IOException {
        if (statement != null) {
            statement.close();
        }
        if (resultSet != null) {
            resultSet.close();
        }
        if (dbConn != null) {
            dbConn.close();
        }
    }

    @Override
    public boolean reachedEnd() throws IOException {
        isLastRecord = resultSet.isLast();
        return isLastRecord;
    }

    @Override
    public ResultSet nextRecord(ResultSet row) throws IOException {
        if (!isLastRecord) {
            resultSet.next();
        }
        return resultSet;
    }
}
This works with the query below, which limits the number of rows fetched:
SELECT a,b,c from xyz where rownum <= 10;
but when I try to fetch all of the roughly 1 million rows, I get the exception below after a random number of rows has been fetched:
java.sql.SQLRecoverableException: Io exception: Socket closed
at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:101)
at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:199)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:263)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:521)
at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1024)
at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:314)
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:228)
at oracle.jdbc.driver.ScrollableResultSet.cacheRowAt(ScrollableResultSet.java:1839)
at oracle.jdbc.driver.ScrollableResultSet.isValidRow(ScrollableResultSet.java:1823)
at oracle.jdbc.driver.ScrollableResultSet.isLast(ScrollableResultSet.java:349)
at JDBCInputFormat.reachedEnd(JDBCInputFormat.java:98)
at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:173)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:559)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketOutputStream.socketWrite0(Native Method)
So, in my case, how can I solve this issue?
I don't think it is possible to ship a ResultSet like a regular record. It is a stateful object that internally maintains a connection to the database server. Using a ResultSet as a record that is transferred between Flink operators means it can be serialized, shipped over the network to another machine, deserialized, and handed to a different thread in a different JVM process. That does not work.
Depending on the connection, a ResultSet might happen to stay on the same machine in the same thread, which might be the case that worked for you. If you want to query a database from within an operator, you could implement the function as a RichMapPartitionFunction. Otherwise, I'd read the ResultSet in the data source and forward the resulting rows.
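For reference, the stock JDBCInputFormat shipped in flink-jdbc already does the latter: it emits one serializable Row per database record instead of the ResultSet itself. A minimal sketch, assuming a recent Flink 1.x, Oracle connection details as placeholders, and that columns a, b, and c are two strings and an int:
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat;
import org.apache.flink.api.java.typeutils.RowTypeInfo;
import org.apache.flink.types.Row;

public class JdbcRowSource {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Declare the column types so each row can be serialized between operators.
        RowTypeInfo rowTypeInfo = new RowTypeInfo(
                BasicTypeInfo.STRING_TYPE_INFO,
                BasicTypeInfo.STRING_TYPE_INFO,
                BasicTypeInfo.INT_TYPE_INFO);

        DataSet<Row> rows = env.createInput(JDBCInputFormat.buildJDBCInputFormat()
                .setDrivername("oracle.jdbc.OracleDriver")
                .setDBUrl("jdbc:oracle:thin:@//host:1521/service") // placeholder
                .setUsername("username")
                .setPassword("password")
                .setQuery("SELECT a, b, c FROM xyz")
                .setRowTypeInfo(rowTypeInfo)
                .finish());

        rows.print(); // each Row is a plain record; no live connection travels with it
    }
}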

Specified Cast is not Valid (Enum with int value, Dapper)

I have a class with a (simple, first cut) implementation of user roles:
class User {
    public Role Role { get; set; }
    // ...
    public User() { this.Role = Role.Normal; }
    public void Save() { Membership.CreateUser(...) } // System.Web.Security.Membership
}

enum Role : int {
    Invalid = 0,
    Normal = 1,
    SuperUser = 4096
}
Before adding the role, everything worked fine (if that matters).
Now, when I try to fetch users, this line fails:
toReturn = conn.Query<User>("SELECT TOP 1 * FROM dbo.UserProfile WHERE 1=1");
The stack trace (from ELMAH):
System.Data.DataException: Error parsing column 2 (Role=1 - Int16) ---> System.InvalidCastException: Specified cast is not valid.
at Deserialize06df745b-4fad-4d55-aada-632ce72e3607(IDataReader )
--- End of inner exception stack trace ---
at Dapper.SqlMapper.ThrowDataException(Exception ex, Int32 index, IDataReader reader) in c:\Dev\Dapper\Dapper\SqlMapper.cs:line 2126
at Deserialize06df745b-4fad-4d55-aada-632ce72e3607(IDataReader )
at Dapper.SqlMapper.<QueryInternal>d__d`1.MoveNext() in c:\Dev\Dapper\Dapper\SqlMapper.cs:line 827
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
at Dapper.SqlMapper.Query[T](IDbConnection cnn, String sql, Object param, IDbTransaction transaction, Boolean buffered, Nullable`1 commandTimeout, Nullable`1 commandType) in c:\Dev\Dapper\Dapper\SqlMapper.cs:line 770
In the database, the column type for Role is smallint.
I'm using Dapper 1.12.1 from NuGet.
Gah. The answer was to make the database and class definitions match.
For smallint (which is what MigratorDotNet generated for me), I needed the enum to derive from short, not int. Everything works now.
Possibly useful Google Code issue: https://code.google.com/p/dapper-dot-net/issues/detail?id=32
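In code, the fix is just changing the enum's underlying type so it matches the column's width:
// smallint in SQL Server maps to Int16, so the enum must derive
// from short for Dapper's cast to succeed:
enum Role : short
{
    Invalid = 0,
    Normal = 1,
    SuperUser = 4096
}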

NHibernate 2nd level cache with Prevalence

I'm writing a Windows Forms application that needs to store some NHibernate entity data in a persistent second-level cache. As far as I know, the only second-level cache provider that satisfies my app's requirements is Prevalence, but I get an awkward exception when I configure it:
System.ArgumentNullException was unhandled
Message=Value cannot be null.
Parameter name: key
Source=mscorlib
ParamName=key
StackTrace:
at System.Collections.Generic.Dictionary`2.FindEntry(TKey key)
at System.Collections.Generic.Dictionary`2.TryGetValue(TKey key, TValue& value)
at NHibernate.Impl.SessionFactoryObjectFactory.GetNamedInstance(String name)
at NHibernate.Impl.SessionFactoryImpl.GetRealObject(StreamingContext context)
at System.Runtime.Serialization.ObjectManager.ResolveObjectReference(ObjectHolder holder)
at System.Runtime.Serialization.ObjectManager.DoFixups()
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream)
at Bamboo.Prevalence.Implementation.PendingCommandsEnumerator.NextCommand()
at Bamboo.Prevalence.Implementation.PendingCommandsEnumerator.MoveNext()
at Bamboo.Prevalence.PrevalenceEngine.RecoverCommands(CommandLogReader reader, ExceptionDuringRecoveryHandler handler)
at Bamboo.Prevalence.PrevalenceEngine.RecoverSystem(Type systemType, CommandLogReader reader, ExceptionDuringRecoveryHandler handler)
at Bamboo.Prevalence.PrevalenceEngine..ctor(Type systemType, String prevalenceBase, BinaryFormatter formatter, ExceptionDuringRecoveryHandler handler)
at Bamboo.Prevalence.TransparentPrevalenceEngine..ctor(Type systemType, String prevalenceBase, BinaryFormatter formatter, ExceptionDuringRecoveryHandler handler)
at Bamboo.Prevalence.TransparentPrevalenceEngine..ctor(Type systemType, String prevalenceBase, BinaryFormatter formatter)
at Bamboo.Prevalence.PrevalenceActivator.CreateTransparentEngine(Type systemType, String prevalenceBase, BinaryFormatter formatter)
at Bamboo.Prevalence.PrevalenceActivator.CreateTransparentEngine(Type systemType, String prevalenceBase)
at NHibernate.Caches.Prevalence.PrevalenceCacheProvider.SetupEngine()
at NHibernate.Caches.Prevalence.PrevalenceCacheProvider.Start(IDictionary`2 properties)
at NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners)
at NHibernate.Cfg.Configuration.BuildSessionFactory()
at AcessoDados.DB.Configure() in C:\Users\Herberth\MyProject\DataAccess\DB.cs:line 78
This is the only extra code I'm using:
configuration.SessionFactory().Caching.Through<NHibernate.Caches.Prevalence.PrevalenceCacheProvider>().PrefixingRegionsWith("MyRegion").WithDefaultExpiration(60);
It works fine when I comment out this line (without the cache, of course).
Here's the complete code I'm using:
configuration = new Configuration();
var mapper = new ModelMapper();
mapper.AddMappings(Assembly.GetExecutingAssembly().GetExportedTypes());
HbmMapping domainMapping = mapper.CompileMappingForAllExplicitlyAddedEntities();

configuration.DataBaseIntegration(c =>
{
    c.Dialect<MySQLDialect>();
    c.ConnectionString = @"Server=localhost;Database=mydb;Uid=root;Pwd=mypwd";
    c.ConnectionString = DBConnectionStrings.Principal;
    c.LogFormattedSql = true;
    c.LogSqlInConsole = true;
    c.IsolationLevel = System.Data.IsolationLevel.ReadCommitted;
});

configuration.AddMapping(domainMapping);
configuration.Cache(c => { c.UseQueryCache = true; });
configuration.SessionFactory().Caching.Through<NHibernate.Caches.Prevalence.PrevalenceCacheProvider>().PrefixingRegionsWith("MyRegion").WithDefaultExpiration(60);
SessionFactory = configuration.BuildSessionFactory();
All dependencies are at their latest versions.
I had this issue because the NHibernate.Cache.StandardQueryCache and UpdateTimestampsCache directories in the executable's directory had become stale.
However, that led to the next issue: I couldn't install under Program Files, because NHibernate attempts to create these directories on first run.
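If someone hits the same stale-directory problem, a hypothetical cleanup is to delete the two cache directories before building the session factory; this sketch assumes they live next to the executable, as they did in my case (which is also why running from Program Files fails without write access):
// Hypothetical sketch: remove stale Prevalence cache directories before
// calling BuildSessionFactory(). Directory names are the ones noted above.
string baseDir = AppDomain.CurrentDomain.BaseDirectory;
foreach (string name in new[] { "NHibernate.Cache.StandardQueryCache", "UpdateTimestampsCache" })
{
    string path = Path.Combine(baseDir, name);
    if (Directory.Exists(path))
        Directory.Delete(path, true); // true = recursive
}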

SQL Server CLR Integration enlisting in current transaction

I'm trying to use CLR integration in SQL Server to handle accessing external files instead of storing them internally as BLOBs. I'm trying to figure out the pattern I need to follow to make my code enlist in the current SQL transaction. I figured I would start with the simplest scenario, deleting an existing row, since the insert/update scenarios would be more complex.
[SqlProcedure]
public static void DeleteStoredImages(SqlInt64 DocumentID)
{
    if (DocumentID.IsNull)
        return;
    using (var conn = new SqlConnection("context connection=true"))
    {
        conn.Open();
        string FaceFileName, RearFileName;
        int Offset, Length;
        GetFileLocation(conn, DocumentID.Value, true,
            out FaceFileName, out Offset, out Length);
        GetFileLocation(conn, DocumentID.Value, false,
            out RearFileName, out Offset, out Length);
        new DeleteTransaction().Enlist(FaceFileName, RearFileName);
        using (var comm = conn.CreateCommand())
        {
            comm.CommandText = "DELETE FROM ImagesStore WHERE DocumentID = " + DocumentID.Value;
            comm.ExecuteNonQuery();
        }
    }
}

private class DeleteTransaction : IEnlistmentNotification
{
    public string FaceFileName { get; set; }
    public string RearFileName { get; set; }

    public void Enlist(string FaceFileName, string RearFileName)
    {
        this.FaceFileName = FaceFileName;
        this.RearFileName = RearFileName;
        var trans = Transaction.Current;
        if (trans == null)
            Commit(null);
        else
            trans.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Commit(Enlistment enlistment)
    {
        if (FaceFileName != null && File.Exists(FaceFileName))
        {
            File.Delete(FaceFileName);
        }
        if (RearFileName != null && File.Exists(RearFileName))
        {
            File.Delete(RearFileName);
        }
    }

    public void InDoubt(Enlistment enlistment) { }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        preparingEnlistment.Prepared();
    }

    public void Rollback(Enlistment enlistment) { }
}
When I actually try to run this, I get the following exception:
A .NET Framework error occurred during execution of user defined routine or aggregate 'DeleteStoredImages':
System.Transactions.TransactionException: The operation is not valid for the state of the transaction. ---> System.Transactions.TransactionPromotionException: MSDTC on server 'BD009' is unavailable. ---> System.Data.SqlClient.SqlException: MSDTC on server 'BD009' is unavailable.
System.Data.SqlClient.SqlException:
at System.Data.SqlServer.Internal.StandardEventSink.HandleErrors()
at System.Data.SqlServer.Internal.ClrLevelContext.SuperiorTransaction.Promote()
System.Transactions.TransactionPromotionException:
at System.Data.SqlServer.Internal.ClrLevelContext.SuperiorTransaction.Promote()
at System.Transactions.TransactionStatePSPEOperation.PSPEPromote(InternalTransaction tx)
at System.Transactions.TransactionStateDelegatedBase.EnterState(InternalTransaction tx)
System.Transactions.TransactionException:
at System.Transactions.TransactionState.EnlistVolatile(InternalTransaction tx, IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions, Transaction atomicTransaction)
at System.Transactions.TransactionStateSubordinateActive.EnlistVolatile(InternalTransaction tx, IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions, Transaction atomicTransaction)
at System.Transactions.Transaction.EnlistVolatile(IEnlistmentNotification enlistmentNotification, EnlistmentOptions enlistmentOptions)
at ExternalImages.StoredProcedures.DeleteTransaction.Enlist(String FaceFileName, String RearFileName)
at ExternalImages.StoredProcedures.DeleteStoredImages(SqlInt64 DocumentID)
. User transaction, if any, will be rolled back.
The statement has been terminated.
Can anyone explain what I'm doing wrong, or point me to an example of how to do it right?
You have hopefully solved this by now, but in case anyone else has a similar problem: the error message you are getting suggests that you need to start the Distributed Transaction Coordinator service on the BD009 machine (presumably your own machine).
@Aasmund's answer regarding the Distributed Transaction Coordinator might solve the stated problem, but it still leaves you in a non-ideal state: you are tying a transaction, which locks the ImagesStore table (even if it is just a row lock), to two file system operations, and you need to BEGIN and COMMIT the transaction outside of this function (since that isn't handled in the presented code).
I would separate those two pieces:
Step 1: Delete the row from the table,
and then, IF that did not error,
Step 2: Delete the file(s).
In the scenario where Step 1 succeeds but Step 2 fails for whatever reason, do one or both of the following:
return an error status code, and track in a status table which DocumentIDs got an error when attempting to delete the file; you can use that to manually delete the files and/or debug why the error occurred.
create a process that runs periodically to find and remove unreferenced files.
A minimal sketch of this separated approach is shown below.
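Here is that sketch, reusing the GetFileLocation helper from the question; the parameterized DELETE and the status-table idea are my additions, not part of the original code:
[SqlProcedure]
public static void DeleteStoredImages(SqlInt64 DocumentID)
{
    if (DocumentID.IsNull)
        return;

    string faceFileName, rearFileName;
    int offset, length;

    // Step 1: database work only, on the context connection.
    using (var conn = new SqlConnection("context connection=true"))
    {
        conn.Open();
        GetFileLocation(conn, DocumentID.Value, true,
            out faceFileName, out offset, out length);
        GetFileLocation(conn, DocumentID.Value, false,
            out rearFileName, out offset, out length);

        using (var comm = conn.CreateCommand())
        {
            // Parameterized instead of concatenated.
            comm.CommandText = "DELETE FROM ImagesStore WHERE DocumentID = @id";
            comm.Parameters.AddWithValue("@id", DocumentID.Value);
            comm.ExecuteNonQuery();
        }
    }

    // Step 2: only reached if the DELETE did not throw. No transaction
    // enlistment, so no MSDTC promotion is needed.
    try
    {
        if (faceFileName != null && File.Exists(faceFileName))
            File.Delete(faceFileName);
        if (rearFileName != null && File.Exists(rearFileName))
            File.Delete(rearFileName);
    }
    catch (IOException)
    {
        // Record DocumentID in a status table here so the periodic
        // cleanup process described above can remove the orphaned files.
    }
}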
