I have created a Blob storage account in which thousands of .pdf files are stored. I verified the process with 600 documents and it was working fine, so I started it for all documents stored in the blobs. After 16 hours it was still running but the processed document count was not increasing, so I deleted the service. It was deleted successfully. Now I have created a new service, but it is not working at all.
I created a secondary content database for my SharePoint site.
In SQL Server, I set the Maximum File Size to 100 MB for the first content database.
Then I started adding documents to the site, expecting SharePoint to start using the second content database once the first one reached 100 MB.
But surprisingly, it kept using the first content database regardless of the limit, and the first content database has now grown beyond 100 MB.
Can anyone explain why it did not work as I expected?
I am working on a web application based on EF with over 1 GB of seeded data. The application is hosted in Azure under a BizSpark subscription account.
I created an App Service Plan with the web application associated with an App Service some time back. I started uploading data to SQL Server, but this failed. I realized that the default database size was 1 GB, so I upgraded the plan to a Standard plan with 10 DTU and 10 GB and uploaded the data around 5 days back.
After that, due to certain issues, I wiped out the App Service Plan and created a new one. The SQL Server size and setup were not modified.
I created a new plan, uploaded the application, and observed the following:
Database tables got wiped out
Database pricing tier was reset to Basic
I upgraded the database plan once again to 10 GB and 10 DTU last night, but I see that the change has not taken effect yet.
How long does it take for the size change to take effect?
Will the tables have to be recreated?
Update (9/11):
I just tried uploading data via the bcp tool, but I got the following error:
1000 rows sent to SQL Server. Total sent: 51000
Communication link failure
Text column data incomplete
Communication link failure
TCP Provider: An existing connection was forcibly closed by the remote host.
Communication link failure
This is new; yesterday (9/10), before I changed the db size, I got the following error:
1000 rows sent to SQL Server. Total sent: 1454000
The database 'db' has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions.
BCP copy in failed
I don't understand the inconsistency in the failure messages, or why the upload failed for the same data file.
Regards,
Lalit
Scaling a database up from Basic to a higher service tier such as Standard should not take more than a few minutes. The schemas and tables inside the database are left unchanged.
You may want to look into the Activity log of your Azure SQL server to understand who initiated the scale-down from Standard to Basic. You may also want to turn on the Auditing feature to track all the operations performed on your database.
For the connectivity issues, you can start with this documentation page. It also looks like you have inserted rows into your database several times through the bcp command, which causes a space issue on the Basic tier.
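If it helps, here is a minimal sketch of how you could check the current tier and request the scale-up yourself with T-SQL through pyodbc; the server name, database name, and credentials are placeholders, and 'S0' is just the 10 DTU Standard objective you mentioned:

```python
# Minimal sketch: check the current tier of an Azure SQL database and request
# a scale-up with T-SQL. Server, database name, and credentials below are
# placeholders; 'S0' (10 DTU) and 10 GB match the tier mentioned above.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yourserver.database.windows.net;"
    "DATABASE=master;UID=youruser;PWD=yourpassword",
    autocommit=True,  # ALTER DATABASE cannot run inside a transaction
)
cur = conn.cursor()

# What tier is the database on right now?
cur.execute("""
    SELECT d.name, dso.edition, dso.service_objective
    FROM sys.database_service_objectives AS dso
    JOIN sys.databases AS d ON d.database_id = dso.database_id
    WHERE d.name = 'db'
""")
print(cur.fetchone())

# Request the scale-up. The operation is asynchronous, so the new tier can
# take a few minutes to show up in the query above and in the portal.
cur.execute(
    "ALTER DATABASE [db] MODIFY "
    "(EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0', MAXSIZE = 10 GB)"
)
```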
I wish to have a large number (e.g. a million) of log files on a system, but the OS has a limit on open files, and it is not efficient to create a million files in a single folder.
Is there a ready-made solution, framework, or database that will create log files and append data to them in an efficient manner?
I can imagine various techniques to optimize the management of a large number of log files, but there might be something that does this out of the box.
For example, I wish each log file to be re-created every day or when it reaches 50 MB. Old log files must be kept, e.g. uploaded to Amazon S3.
I can also imagine a log database that writes all logs into a single file and then, in a later processing step, appends the records to the millions of individual files.
Maybe there is a special file system that is good for such a task. I can't find anything, but I am sure there must be a solution.
PS: I wish to run logging on a single server. I say 1 million because it is more than the default limit on open files. 1 million files of 1 MB each is 1 TB, which can be stored on a regular hard drive.
I am looking for an existing solution before I write my own. I am sure there is a set of logging servers out there; I just do not know how to search for them.
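To make the requirement concrete, here is roughly the behaviour I mean, sketched with Python's standard logging handlers; the paths, the 50 MB threshold, and the hash-sharded folder layout are only illustrations, not a solution I have settled on:

```python
# Sketch: one rotating log file per source, sharded into subfolders so no
# single directory holds millions of files. Paths, thresholds, and the
# shard scheme are illustrative assumptions, not a drop-in solution.
import hashlib
import logging
import os
from logging.handlers import RotatingFileHandler


def get_logger(source_id: str, root: str = "/var/log/many") -> logging.Logger:
    logger = logging.getLogger(f"many.{source_id}")
    if not logger.handlers:
        # Shard by a hash prefix, e.g. /var/log/many/3f/a2/source_id.log
        digest = hashlib.sha1(source_id.encode()).hexdigest()
        folder = os.path.join(root, digest[:2], digest[2:4])
        os.makedirs(folder, exist_ok=True)
        handler = RotatingFileHandler(
            os.path.join(folder, f"{source_id}.log"),
            maxBytes=50 * 1024 * 1024,  # rotate at 50 MB
            backupCount=10,
            delay=True,  # don't open the file until the first write, so idle
                         # sources don't consume file descriptors
        )
        handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger


get_logger("device-000123").info("hello")
# Daily rotation could use logging.handlers.TimedRotatingFileHandler instead,
# and a separate job could move rotated files (".log.1" etc.) to S3.
```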
I would start thinking of Cassandra or Hadoop as a store for the log data; then, if you eventually want the data in the form of files, write a procedure that selects from one of these databases and writes the results into formatted files.
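As a minimal sketch of the Cassandra variant (assuming the Python cassandra-driver; the keyspace, table, and column names are purely illustrative), you would write log entries into a partition per source and day and later select a partition out into a formatted file:

```python
# Sketch of the Cassandra option: one partition per (source, day), so a day's
# logs for one source can be selected and dumped to a file. Keyspace, table,
# and column names are illustrative assumptions.
import datetime
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS logs
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS logs.entries (
        source_id text,
        day       date,
        ts        timestamp,
        message   text,
        PRIMARY KEY ((source_id, day), ts)
    )
""")

now = datetime.datetime.utcnow()
session.execute(
    "INSERT INTO logs.entries (source_id, day, ts, message) VALUES (%s, %s, %s, %s)",
    ("device-000123", now.date(), now, "hello"),
)

# Producing a "log file" later is a per-partition select:
rows = session.execute(
    "SELECT ts, message FROM logs.entries WHERE source_id = %s AND day = %s",
    ("device-000123", now.date()),
)
with open("device-000123.log", "w") as f:
    for row in rows:
        f.write(f"{row.ts} {row.message}\n")
```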
I have uploaded an MS Access database to a shared drive location in a Windows folder. For a couple of days the database works fine, and then it automatically starts creating backup copies of the database every time users try to use it. When the backup copies are created, the size of the parent database drops from 10 MB to 150-200 KB.
When users try to open the database, they get the message: "Unrecognized database format '\\10.10.5.7\Database\DB-R.accdb'".
Any suggestions?
Online searches show this could be related to:
1. The 64-bit version of Access vs. the 32-bit version
2. The version of Access you are running, if it is not patched
See this related question:
Similar Stack Overflow question
We have an architecture for our ERP customers whereby a customer can have multiple databases, each running at a different location. The customer has a head office database where data from these different databases is accumulated on a running basis. We currently have a file-based approach: we write files for all the database changes in a particular format and upload them to the head office location on a running basis. At the head office there is a program running all the time; as soon as a file is uploaded to the HO FTP, the head office exe catches it, downloads it, and updates the head office database based on the location ID the data was received from.
This approach has been working fine for the last 10-12 years, but now we have started facing issues, as the number of locations has increased to more than 100 per customer, producing a data flow of more than 4-5 lakh records daily.
The issue is with the head office exe and the database updates, as the number of files and the amount of data to be updated/inserted is too large.
I have been searching for a proper, scalable solution to this issue.
Maybe replication or some other approach can help.
Help and suggestions are appreciated.
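For reference, the head office side is roughly equivalent to a loop like the sketch below; the inbox path, the file naming convention carrying the location ID, and the staging table are illustrative only, not our actual code:

```python
# Rough sketch of the head office importer loop described above. The inbox
# path, the "<locationid>_<date>.csv" file naming, and the staging table are
# illustrative assumptions.
import csv
import os
import time

import pyodbc

INBOX = r"C:\HO\inbox"  # where the downloaded change files land
CONN = ("DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=.;DATABASE=HO;Trusted_Connection=yes")


def apply_file(path, location_id, cur):
    """Insert every change row, tagged with the location it came from."""
    with open(path, newline="") as f:
        for row in csv.reader(f):
            cur.execute(
                "INSERT INTO ho_staging (location_id, payload) VALUES (?, ?)",
                location_id, ",".join(row),
            )


conn = pyodbc.connect(CONN, autocommit=False)
cur = conn.cursor()

while True:
    for name in sorted(os.listdir(INBOX)):
        location_id = name.split("_")[0]   # e.g. "LOC042_20240911.csv"
        path = os.path.join(INBOX, name)
        apply_file(path, location_id, cur)
        conn.commit()                      # one transaction per file
        os.remove(path)                    # processed; don't pick it up again
    time.sleep(30)                         # poll for newly uploaded files
```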
You could use SymmetricDS to synchronize the databases and consolidate data at the head office. It is an open source replication server that captures changes and sends them periodically over a web-based protocol to target databases. It was designed to work even when bandwidth is low and it has automatic recovery if the network is spotty. The data can be transformed and enriched, so you can add a location ID on the fly to identify the customer of the data. It's been deployed to production to sync large numbers (in the thousands) of databases, so a lot of work has gone into scalability. The project development is also sponsored by a commercial company, JumpMind, who is interested in its long term success and provides commercial products and support for it.