I have an existing Azure web application backed by an Azure SQL database. My plan is to use this same database in new mobile applications I am building; however, I originally made a design decision that doesn't work with Azure Mobile Services. I made the keys in my existing database integer IDs, and as of recently, a database used within the Mobile service needs IDs that are string GUIDs. I have over 200 users in my existing database, with other associated tables all tied to these IDs.
My question is: is there a feature or methodology for converting all of these integer keys to string keys without dropping everyone's data and requiring them to manually set things up again?
My database knowledge is limited, but from all I've seen, Azure Mobile Services now requires that the keys be strings, and there isn't a workaround for it.
Any help is much appreciated, Thanks!
To change the datatype of a column in SQL, run the following command:
ALTER TABLE table_name
ALTER COLUMN column_name column_type
For example, assume your table is named table1 and the column is called keys:
ALTER TABLE table1
ALTER COLUMN keys varchar(10)
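Note that if the integer column is a primary key that other tables reference (as with the user IDs in the question), a bare ALTER COLUMN will fail; the constraints have to be dropped and recreated around the type change. A rough sketch, using hypothetical Users/Orders table and constraint names:
ALTER TABLE Orders DROP CONSTRAINT FK_Orders_Users;
ALTER TABLE Users DROP CONSTRAINT PK_Users;

-- int values convert implicitly ('42' becomes the string '42'),
-- so the existing relationships still line up after the change
ALTER TABLE Users ALTER COLUMN UserId nvarchar(36) NOT NULL;
ALTER TABLE Orders ALTER COLUMN UserId nvarchar(36) NOT NULL;

ALTER TABLE Users ADD CONSTRAINT PK_Users PRIMARY KEY (UserId);
ALTER TABLE Orders ADD CONSTRAINT FK_Orders_Users
    FOREIGN KEY (UserId) REFERENCES Users (UserId);
Existing rows keep their old integer values as strings, while new rows inserted through the mobile service can receive GUID strings.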
I have a huge order table in Azure SQL. I have a boolean field, "IsOrderActive", to separate hot and cold orders. Is it possible to automatically transfer the cold data to a separate database with Azure SQL?
One way to accomplish this task is to divide the order table into two using T-SQL commands, and then transfer the cold-data table to a different database (on a different server) using SSMS.
Here are the repro steps I followed.
Create a table
create table hotcoldtable (orderID int, IsOrderActive char(3))
Insert demo data into the table
insert into hotcoldtable
values (1,'yes')
,(2,'no')
,(3,'yes')
,(4,'yes')
,(5,'no')
,(6,'no')
,(7,'yes')
Divide the table into cold and hot data tables using the commands below
cold data table - select OrderID, IsOrderActive into coldtable from hotcoldtable where IsOrderActive = 'no'
hot data table - select OrderID, IsOrderActive into hottable from hotcoldtable where IsOrderActive = 'yes'
You can see two new tables in your database.
In SQL Server Management Studio (SSMS), log in to your Azure SQL server. Fill in the details and click Connect.
Right-click the database that contains the order tables and select Tasks > Generate Scripts...
Choose Select specific database objects and mark the objects you want to script (here, the coldtable).
On the scripting options page, open Advanced and set Types of data to script to Schema and data so the generated script includes the rows, then choose where to save the script file.
Review the details and click on Next. This will generate your script.
Go to the location where your script got saved. Open the file in any editor and copy the script.
Now, in the Azure portal, go to the database where you want to transfer the cold data table. Open the Query Editor, paste the copied script into the editor pane, and run it; the table will be created in this database.
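For reference, with the demo data above, the generated script for the cold table looks roughly like this (abbreviated; the real script also includes SET options and other scaffolding):
CREATE TABLE [dbo].[coldtable] (
    [OrderID] int NULL,
    [IsOrderActive] char(3) NULL
)
INSERT [dbo].[coldtable] ([OrderID], [IsOrderActive]) VALUES (2, 'no')
INSERT [dbo].[coldtable] ([OrderID], [IsOrderActive]) VALUES (5, 'no')
INSERT [dbo].[coldtable] ([OrderID], [IsOrderActive]) VALUES (6, 'no')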
Are you referring to SQL Server Stretch Database to Azure? Check this out: https://www.mssqltips.com/sqlservertip/5526/how-to-setup-and-use-a-sql-server-stretch-database
If you are interested in saving space by archiving the cold data, you can use two separate tables in the same or different databases. The key point is to use a columnstore index for the archive (cold) table. Depending on your data, you should be able to achieve between 30% and 60% data compression.
This can't be done without running some queries, but it can be automated using Azure Workbooks.
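As a rough illustration of those queries (table and column names are hypothetical), the archive table gets a clustered columnstore index, and cold rows are moved across periodically:
-- archive table with the same columns as the hot order table
CREATE TABLE dbo.Orders_Archive (
    OrderID int NOT NULL,
    CustomerID int NOT NULL,
    OrderDate datetime2 NOT NULL,
    IsOrderActive bit NOT NULL
);

-- columnstore storage provides the compression mentioned above
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Orders_Archive ON dbo.Orders_Archive;

-- periodically move cold rows out of the hot table
INSERT INTO dbo.Orders_Archive
SELECT OrderID, CustomerID, OrderDate, IsOrderActive
FROM dbo.Orders
WHERE IsOrderActive = 0;

DELETE FROM dbo.Orders WHERE IsOrderActive = 0;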
I built a similar kind of functionality that helped me save 58% space in Azure SQL database.
Please comment if this is something you feel might help. I can share more details about this.
Database sharding seems like a possible solution for this scenario: cold orders can be put on Azure SQL serverless databases, which have auto-pause and auto-resume capabilities, so you save money when they are not in use, paying only for storage. Azure SQL Database provides a good set of tools to support sharding.
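As a minimal sketch (run in the master database of the logical server; the database name is hypothetical, and GP_S_Gen5_1 is the General Purpose serverless service objective with 1 vcore):
CREATE DATABASE ColdOrders
    ( EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_1' );
The auto-pause delay itself is configured through the portal, ARM, or the Azure CLI rather than T-SQL.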
Documentation on Azure Search says I can use Azure SQL Server as a datasource (https://azure.microsoft.com/en-us/documentation/articles/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers-2015-02-28/). Can I do the same with an on-premises SQL Server?
I have a typical relational structure like
User Table -> Address Table
User Table -> UserDetails table, etc.
All linked to each other via foreign keys. My search should end up with a UserId, so I can link to my UserDetailsPage.aspx?UserId=xxx
What is the best way to build the datasource? Should I create a view and apply change tracking to it, or should I create a separate datasource for each table, each syncing into the same index?
Please shed some light on best practices in a typical relational database scenario.
Yes; you would need to allow the IP address of your search service to connect to your on-premises DB.
In terms of view vs. multiple indexers targeting the same index - both approaches might work. What info will your users be searching on - address, details, or both? If it's only one of those, then you wouldn't have to index both tables.
Keep in mind that if you decide to index a view joining both tables, you won't be able to use SQL integrated change tracking, and will have to rely on a rowversion or timestamp column in the view.
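As a sketch of what such a view might look like (hypothetical table and column names, with a rowversion column on the user table exposed as the high-water mark):
CREATE VIEW dbo.UserSearchView
AS
SELECT u.UserId,
       u.UserName,
       a.City,
       d.Details,
       u.RowVer AS HighWaterMark  -- rowversion column the indexer tracks for changes
FROM dbo.Users u
JOIN dbo.Address a ON a.UserId = u.UserId
JOIN dbo.UserDetails d ON d.UserId = u.UserId;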
HTH
I found SimpleMembershipProvider to be pretty neat and productive.
I was wondering if there is a way to control the generated table/column names/datatypes.
You can control the table name for the user table, and the column names for the user ID and user name columns in it; you specify those in the InitializeDatabaseConnection method. The SimpleMembershipProvider expects the UserId column to be an IDENTITY column. It uses @@IDENTITY to obtain the ID of newly created records. Currently, the SimpleMembershipProvider only works with SQL Server (Express or full) or SQL Server Compact 4.0 databases.
You can't change the schema of the membership or roles tables. The SQL for managing accounts and using those tables is hard-coded into the SimpleMembershipProvider.
I am creating an application in Microsoft Access. This is for a small database that the customer will run on a desktop. No network of any kind will be involved. All the necessary files to use the database must be on a single desktop computer.
I want to deliver the app to my customer in stages. Most likely I will email the .accdb file to the customer. How do I deliver an update and maintain any data already entered by the customer? Updates may include changes to the table structure as well as to forms.
The answers given to my original question address the issue of changing forms and other UI elements. However, what if I want to add a table, or add a column to an existing one? How do I seamlessly deliver such changes while preserving as much data as possible on the user's end?
Split the database and the interface into separate files. Google should have plenty of information as this is typical for MS Access apps.
Here are a few resources to get you started:
How to manually split a Access database in Microsoft Access
Splitting an Access Database, Step by Step
You absolutely MUST (!) split your database into two parts: a backend storing the tables ("the database") and a frontend containing the forms, reports, queries, and application logic ("the application"). Link the tables from the backend into the frontend.
The frontend might also contain tables with control parameters, report dictionaries, etc., but no data that your customer enters!
Newer versions of Access have a database splitting wizard.
You might need code that automatically relinks the backend tables to the frontend on the customer's site.
UPDATE
You have two possibilities for altering the schema of your database on the customer's PC.
1) Do the updates through the DAO (or ADOX) object model, e.g.
Dim db As DAO.Database
Dim tdf As DAO.TableDef
Set db = CurrentDb
Set tdf = db.CreateTableDef("tblNew")
tdf.Fields.Append tdf.CreateField("fieldname", dbText, 50)
' ... append further fields here ...
db.TableDefs.Append tdf
2) Use DDL queries
CREATE TABLE MyNewTable (
ID AUTOINCREMENT,
Textfield TEXT(50),
LongField LONG,
...,
CONSTRAINT PK_MyNewTable PRIMARY KEY (ID)
)
Or
ALTER TABLE SomeExistingTable ADD COLUMN Newcolumn Text(50)
Actually, I want to migrate a large dataset to another database that already has some data. The schema is the same in both databases.
The scenario: my client has an application already running in production, and he gave me new requirements to implement. After implementation, he wants to run acceptance testing of the new requirements on a temporary production server at two locations, so I attached the existing database to a new production server. Now I want to write a DB script that migrates the data entered at one location between the start and end of acceptance testing.
My problem is that the TicketID of my table is an identity column, so the application running on both database servers will insert the same TicketIDs. When I migrate the data, there is a primary key conflict. The schema of the parent table is as follows:
TicketID int IDENTITY(1,1)
LocationID int
Problem varchar(500)
IssueDate datetime
Another issue is that the Ticket ID is printed on the customer receipt, and the client doesn't want existing ticket numbers to change.
Please suggest solutions to this problem.
One solution is to add an OldTicketID column, but for that I would need to change my application code, which I want to avoid because there are many child tables referencing this one.
You can change the identity column to generate only odd numbers on the old dataset (IDENTITY(1,2)) and only even numbers on the new dataset (IDENTITY(seed,2)).
The seed should be set higher than the largest TicketID currently in the production system, so there won't be any conflicts between the IDs.
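One caveat: SQL Server doesn't allow changing the increment of an existing identity column in place, so switching to IDENTITY(seed, 2) effectively means rebuilding the table. A minimal sketch, assuming a hypothetical dbo.Ticket table and an even seed of 100000 chosen above the production maximum:
-- new copy of the parent table: even IDs only, seeded above MAX(TicketID)
-- (replace 100000 with a suitable even value for your data)
CREATE TABLE dbo.Ticket_New (
    TicketID int IDENTITY(100000, 2) NOT NULL PRIMARY KEY,
    LocationID int NOT NULL,
    Problem varchar(500) NULL,
    IssueDate datetime NOT NULL
);

-- copy the existing rows, keeping their original TicketIDs
SET IDENTITY_INSERT dbo.Ticket_New ON;
INSERT INTO dbo.Ticket_New (TicketID, LocationID, Problem, IssueDate)
SELECT TicketID, LocationID, Problem, IssueDate FROM dbo.Ticket;
SET IDENTITY_INSERT dbo.Ticket_New OFF;

-- swap the tables in
DROP TABLE dbo.Ticket;
EXEC sp_rename 'dbo.Ticket_New', 'Ticket';
Foreign keys on the child tables would need to be dropped and recreated around the drop/rename step.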