CodeIgniter won't connect to one table only - sql-server

Running PHP 5.3 and SQL Server 2008 R2, using the sqlsrv driver to connect.
CodeIgniter 2.2
So I have been using CodeIgniter for a couple of years now, and this is the first time I have run across this problem. I have a table in my database named 'update_times', a log table for the data updates I load daily; it has around 2,000 records in it and is indexed on the columns I query against. My database has 60 or so tables, and 'update_times' is the only table I am unable to select anything from with CodeIgniter.
I have done a bunch of tests:
I ran a record count for every table in the database; every table returned the correct count except 'update_times', which returned 0 records.
I can query (select) from the table in Management Studio with no problem.
I can also select from 'update_times' using the PHP sqlsrv function sqlsrv_query; records come back fine that way.
I am unable to select from it using CodeIgniter's active record select or query methods (I tried it in multiple controllers/models).
Here's the weird part: I can insert, update, and delete using the active record functions. It is only the select that fails, and only on this table.
I tried rebuilding the indexes and then rebuilding the entire table, but nothing helped, so I am left stumped. I was going to just create a new table with a new name to replace update_times, but I really want to find the problem in CI so that I know what to do if it happens again. It's almost as if CI is blocking the select for some reason.
I have now created a table with a different name but the same structure, and I am unable to query it, just like update_times. I am still stumped.
Here is the table structure of update_times:
CREATE TABLE [dbo].[update_times](
[ut_id] [int] IDENTITY(1,1) NOT NULL,
[table_name] [varchar](50) NOT NULL,
[start_time] [datetime] NOT NULL,
[end_time] [datetime] NOT NULL,
[records] [int] NOT NULL,
[emp_id] [int] NULL,
[dates_requested] [varchar](50) NULL,
PRIMARY KEY CLUSTERED
(
[ut_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
Any help would be great, as would suggestions on how to narrow down the error.
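One database-side way to narrow it down is to check whether more than one table named update_times exists across schemas, since a select that resolves to an empty same-named table in another schema would also return 0 records. A check along these lines uses only standard catalog views:
-- List every table named update_times in the database, with its schema
-- and row count (index_id 0 = heap, 1 = clustered index):
SELECT s.name AS schema_name, t.name AS table_name, p.rows
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.partitions AS p ON p.object_id = t.object_id AND p.index_id IN (0, 1)
WHERE t.name = 'update_times';
Capturing the exact SQL text CodeIgniter sends (for example with a Profiler trace) and replaying it in Management Studio would also help separate a driver problem from a database one.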

Related

MS SQL removes newly inserted row automatically straight after insert

I have a weird issue. In my MSSQL database I have multiple tables, but only two of them do this:
I insert a new row
get the row count (which has increased by 1)
get the row count again within seconds (this time it has decreased), and the new row is not in the table anymore
The queries I use to insert a row and get the count:
INSERT INTO [dbo].[CSMobileMessages]
([MessageSID],[IssueID],[UserSent])
VALUES
('213',0,'blabla')
SELECT count([IDx])
FROM [dbo].[CSMobileMessages]
The SQL query returns "1 row affected", and I even get back the new row's ID from the identity column. No errors at all. I checked in Profiler, which shows 1 row inserted successfully and nothing else happening.
The table has no triggers. The only index is on the identity field (IDx), and the user is "sa" with full access. I tried a different user, but the same thing happens.
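A couple of server-side checks may still be worth running to rule things in or out; these use standard system views and DBCC commands with the table name from the question:
-- Any triggers on the table, including disabled ones:
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID('dbo.CSMobileMessages');

-- Report the oldest active transaction in the current database; an
-- uncommitted transaction in another session would make the new row
-- visible briefly and then gone after a rollback:
DBCC OPENTRAN;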
The table is called "CSMobileMessages", so I created a new table:
CREATE TABLE [dbo].[CSMobileMessages2](
[IDx] [int] IDENTITY(1,1) NOT NULL,
[MessageSID] [varchar](50) NULL,
[IssueID] [int] NOT NULL,
[UserSent] [varchar](50) NULL,
CONSTRAINT [PK_CSMobileMessages2] PRIMARY KEY CLUSTERED
(
[IDx] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[CSMobileMessages2] ADD CONSTRAINT [DF_CSMobileMessages2_IssueID] DEFAULT ((0)) FOR [IssueID]
GO
I inserted 1000 rows into the new table, and it worked. So I deleted the old table (CSMobileMessages) and renamed the new table from CSMobileMessages2 to CSMobileMessages.
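(For reference, the drop-and-rename step would look something like this; the exact commands used aren't shown in the question, so this is an assumption:)
-- Remove the old table and give the replacement its name:
DROP TABLE [dbo].[CSMobileMessages];
GO
EXEC sp_rename 'dbo.CSMobileMessages2', 'CSMobileMessages';
GO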
As soon as I did that, the inserted rows got deleted, and I got exactly the same row count for the new table as I had with the old one. Also, I can't insert rows anymore. No services or other software touch this table. However, if I restart the server, I can insert one new row, and after that it starts happening again.
Edit:
I use SSMS and connect to the database remotely, but I tried locally on the server as well, and the same thing happens. A service used this table, but I disabled it when this started a few days ago. Before that, the service had run happily for a year with no issues. I double-checked to make sure: the service is turned off, and no one connects to that table but me.
Has anyone ever seen this issue before, and do you know what causes it?
I gave up, and as a last try the whole database was restored from a backup a few days old; it is now working as it is supposed to. I am not marking this question as answered because, even though this fixed the problem, I still have no idea what exactly happened; in my 20+ years of coding I have never seen anything like this before.
Thanks to everyone who tried to help with ideas!

MS PowerApps: How to "Patch" a SQL Table with Composite Primary Key

I am relatively new to MS PowerApps.
I have SQL Server Express installed on an on-site server, with a Gateway for PowerApps.
My SQL Server table has a composite primary key; it is defined as:
CREATE TABLE [GFX_Information].[BusinessParnterAccess]
(
[BpAccesID] [int] IDENTITY(1,1) NOT NULL,
[CreatedDate] [datetime] NOT NULL,
[UpdatedDate] [datetime] NOT NULL,
[LastOperatorID] [int] NOT NULL,
[CreateByID] [int] NOT NULL,
[BPID] [int] NOT NULL,
[AllowedOperatorID] [int] NOT NULL,
[AccessFlag] [varchar](10) NULL,
PRIMARY KEY CLUSTERED ([AllowedOperatorID] ASC, [BPID] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [GFX_Information].[BusinessParnterAccess]
ADD DEFAULT (GETDATE()) FOR [CreatedDate]
GO
ALTER TABLE [GFX_Information].[BusinessParnterAccess]
ADD DEFAULT (GETDATE()) FOR [UpdatedDate]
GO
I am trying to work out how to "Patch" a new record.
Currently, using the OnVisible event, I create a variable to hold the last BpAccesID, like this:
UpdateContext ({varLastAccessID:First(SortByColumns('[GFX_Information].[BusinessParnterAccess]',"BpAccesID",Descending)).BpAccesID});
I am using a manual set of values for the Patch command for testing purposes. The Patch command is:
Patch('[GFX_Information].[BusinessParnterAccess]',Defaults('[GFX_Information].[BusinessParnterAccess]')
,{BpAccesID:varLastAccessID+1
,CreatedDate: Now()
,UpdatedDate:Now()
,LastOperatorID:4
,CreateByID:4
,BPID:342
,AllowedOperatorID:4
,AccessFlag:"RW" });
However, this does not throw an error that I can detect, nor can I see what I am missing.
Can anyone provide any ideas, please?
I was reading this, and the following suggestion is based on my knowledge of SQL Server and a quick read about Patch. It may help you, or it may not (I'm sorry). Also, just to confirm: I'm guessing that the question is "this doesn't create a new row and I cannot see why?"
I would guess that your issue is with BpAccesID. You've set it as an identity: [BpAccesID] [int] IDENTITY(1,1) NOT NULL,
However, you explicitly insert a value into it:
Patch('[GFX_Information].[BusinessParnterAccess]',Defaults('[GFX_Information].[BusinessParnterAccess]')
,{BpAccesID:varLastAccessID+1
Of course, you usually cannot insert into an IDENTITY column in SQL Server; you need to set IDENTITY_INSERT ON (then OFF again after you finish). Also, as an aside, one of the reasons for IDENTITY PK columns is to always create a new row with a valid PK. How does the approach above handle concurrency, e.g., two users trying to create a new row at the same time?
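For illustration, an explicit insert into the identity column from T-SQL would have to look something like this (the values are the test values from the Patch call; 101 stands in for varLastAccessID + 1):
SET IDENTITY_INSERT [GFX_Information].[BusinessParnterAccess] ON;

-- With IDENTITY_INSERT ON, an explicit column list is required;
-- CreatedDate and UpdatedDate fall back to their GETDATE() defaults:
INSERT INTO [GFX_Information].[BusinessParnterAccess]
    ([BpAccesID], [LastOperatorID], [CreateByID], [BPID], [AllowedOperatorID], [AccessFlag])
VALUES
    (101, 4, 4, 342, 4, 'RW');

SET IDENTITY_INSERT [GFX_Information].[BusinessParnterAccess] OFF;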
Anyway, some potential solutions off the top of my head. Once again, this is based off my knowledge of SQL Server only.
Alter the MS PowerApps statement to work with the IDENTITY (I'll leave this up to you), whether via the equivalent of SET IDENTITY_INSERT table ON; or otherwise
Remove the IDENTITY property from BpAccesID (e.g., leave it as a plain int)
Make the primary key a composite of all three columns, e.g., AllowedOperatorID, BPID, BpAccesID
Make BpAccesID the primary key but nonclustered, and create a unique clustered index on AllowedOperatorID, BPID
For the bottom two, as BpAccesID is still an IDENTITY, you'll need to let SQL Server handle calculating the new value.
If you are not using foreign keys to this table, then the bottom two will have similar effects.
However, if there are foreign keys, then the bottom one (a nonclustered PK plus a clustered unique index on the other two columns) is probably the closest to your current setup (and is actually what I would typically do with a table structure like yours, regardless of PowerApps or other processing).
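A sketch of that structure, written as a fresh table definition (the constraint and index names here are invented):
CREATE TABLE [GFX_Information].[BusinessParnterAccess]
(
    [BpAccesID] [int] IDENTITY(1,1) NOT NULL,
    [CreatedDate] [datetime] NOT NULL DEFAULT (GETDATE()),
    [UpdatedDate] [datetime] NOT NULL DEFAULT (GETDATE()),
    [LastOperatorID] [int] NOT NULL,
    [CreateByID] [int] NOT NULL,
    [BPID] [int] NOT NULL,
    [AllowedOperatorID] [int] NOT NULL,
    [AccessFlag] [varchar](10) NULL,
    -- BpAccesID stays an IDENTITY that SQL Server populates, and as a
    -- nonclustered PK it remains a single-column target for foreign keys:
    CONSTRAINT [PK_BusinessParnterAccess] PRIMARY KEY NONCLUSTERED ([BpAccesID])
)
GO
-- The old composite key lives on as the clustered uniqueness rule:
CREATE UNIQUE CLUSTERED INDEX [UX_BusinessParnterAccess_AllowedOperatorID_BPID]
    ON [GFX_Information].[BusinessParnterAccess] ([AllowedOperatorID], [BPID])
GO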

Clustered index on varchar on a small table

Hi guys,
I inherited a database with the following table, which has only 200 rows:
CREATE TABLE [MyTable](
[Id] [uniqueidentifier] NOT NULL,
[Name] [varchar](255) NULL,
[Value] [varchar](8000) NULL,
[EffectiveStartDate] [datetime] NULL,
[EffectiveEndDate] [datetime] NULL,
[Description] [varchar](2000) NOT NULL DEFAULT (''),
CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
As you can see, there is a clustered PK on a uniqueidentifier column. I was doing some performance checks, and the most expensive query so far (CPU and IO) is the following:
SELECT @Result = Value
FROM MyTable
WHERE @EffectiveDate BETWEEN EffectiveStartDate AND EffectiveEndDate
AND Name = @VariableName
The query above is encapsulated in a UDF; the UDF is usually not called in a select list or where clause, but rather its return value is assigned to a variable.
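So the UDF is essentially a wrapper of this shape (the function name is invented; the parameter types mirror the table definition):
CREATE FUNCTION [dbo].[GetMyTableValue]
(
    @VariableName varchar(255),  -- matches Name varchar(255)
    @EffectiveDate datetime      -- compared against the datetime range columns
)
RETURNS varchar(8000)
AS
BEGIN
    DECLARE @Result varchar(8000);

    SELECT @Result = Value
    FROM MyTable
    WHERE @EffectiveDate BETWEEN EffectiveStartDate AND EffectiveEndDate
      AND Name = @VariableName;

    RETURN @Result;
END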
The execution plan shows a Clustered Index Scan.
Our system is based on a large number of aggregations and math processing in real time. Every time our web application refreshes the main page, it calls a bunch of stored procedures and UDFs, and the query above runs around 500 times per refresh per user.
My question is: should I change the PK to nonclustered and create a clustered index on Name, EffectiveStartDate, EffectiveEndDate in such a small table?
No, you should not. You can just add another index, which will be a covering index:
CREATE INDEX [IDX_Covering] ON dbo.MyTable(Name, EffectiveStartDate, EffectiveEndDate)
INCLUDE(Value)
If @VariableName and @EffectiveDate are variables of the correct types, you should now see an index seek.
I am not sure this will help, but you need to try, because an index scan of 200 rows is nothing by itself, but calling it 500 times may be a problem. By the way, if those 200 rows fit in one page, I suspect this will not help. The problem may be somewhere else, like opening a connection 500 times or something like that...

Speed up retrieval of distinct values for dropdown via caching

Overview
In my ASP.NET MVC application, I have several pages that use a DataRecord search functionality, dynamically configured by the site admin so that specific DataRecord fields are available as criteria in one of a few different search input types. One of the available input types is a dropdown, which is populated with the distinct DataRecord values of that particular field that are relevant to the current search context.
I'm looking to decrease the amount of time it takes to create these dropdowns, and am open to suggestions.
I'll list things out in the following manner:
SQL Structure
Sample Query
Business Rules
Miscellaneous Info (may or may not be relevant, but I didn't want to rule anything out)
SQL Structure
Listed from greatest to lowest scope, with only the relevant fields shown. Each table has a one-to-many relationship with the table that follows. Keep in mind these were all created and are maintained via EF Code First with Migrations.
CREATE TABLE [dbo].[CompanyInfoes](
[Id] [int] IDENTITY(1,1) NOT NULL,
CONSTRAINT [PK_dbo.CompanyInfoes] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
CREATE TABLE [dbo].[BusinessLines](
[Id] [int] IDENTITY(1,1) NOT NULL,
[Company_Id] [int] NOT NULL,
CONSTRAINT [PK_dbo.BusinessLines] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
ALTER TABLE [dbo].[BusinessLines] WITH CHECK ADD CONSTRAINT [FK_dbo.BusinessLines_dbo.CompanyInfoes_Company_Id] FOREIGN KEY([Company_Id])
REFERENCES [dbo].[CompanyInfoes] ([Id])
ALTER TABLE [dbo].[BusinessLines] CHECK CONSTRAINT [FK_dbo.BusinessLines_dbo.CompanyInfoes_Company_Id]
CREATE TABLE [dbo].[DataFiles](
[Id] [int] IDENTITY(1,1) NOT NULL,
[FileStatus] [int] NOT NULL,
[FileEnvironment] [int] NOT NULL,
[BusinessLine_Id] [int] NOT NULL,
CONSTRAINT [PK_dbo.DataFiles] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
ALTER TABLE [dbo].[DataFiles] WITH CHECK ADD CONSTRAINT [FK_dbo.DataFiles_dbo.BusinessLines_BusinessLine_Id] FOREIGN KEY([BusinessLine_Id])
REFERENCES [dbo].[BusinessLines] ([Id])
ON DELETE CASCADE
ALTER TABLE [dbo].[DataFiles] CHECK CONSTRAINT [FK_dbo.DataFiles_dbo.BusinessLines_BusinessLine_Id]
CREATE TABLE [dbo].[DataRecords](
[Id] [int] IDENTITY(1,1) NOT NULL,
[File_Id] [int] NOT NULL,
[Field1] [nvarchar](max) NULL,
[Field2] [nvarchar](max) NULL,
...
[Field20] [nvarchar](max) NULL,
CONSTRAINT [PK_dbo.DataRecords] PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
ALTER TABLE [dbo].[DataRecords] WITH CHECK ADD CONSTRAINT [FK_dbo.DataRecords_dbo.DataFiles_File_Id1] FOREIGN KEY([File_Id])
REFERENCES [dbo].[DataFiles] ([Id])
ON DELETE CASCADE
ALTER TABLE [dbo].[DataRecords] CHECK CONSTRAINT [FK_dbo.DataRecords_dbo.DataFiles_File_Id1]
Sample Query (as generated by EF)
SELECT [Distinct1].[Field2] AS [Field2]
FROM ( SELECT DISTINCT
[Extent1].[Field2] AS [Field2]
FROM [dbo].[DataRecords] AS [Extent1]
INNER JOIN [dbo].[DataFiles] AS [Extent2] ON [Extent1].[File_Id] = [Extent2].[Id]
WHERE ([Extent2].[BusinessLine_Id] IN (4, 5, 6, 7, 8, 11, 12, 13, 14)) AND (0 = [Extent2].[FileEnvironment]) AND (1 = [Extent2].[FileStatus])
) AS [Distinct1]
Business Rules
The values within the dropdown should be based on the viewing user's BusinessLine access (the [BusinessLine_Id] clause in the query) and the current page the search is being used with (the [FileEnvironment] and [FileStatus] clauses).
Which of the 20 DataRecord fields should be presented as a dropdown for searching is controlled by a site admin via an admin page, and is configured at a company level. Company A may have a dropdown for Field1; Company B may have one for Field5, Field7, and Field18; and Company C may not have any dropdowns whatsoever.
While the layout and format of the DataRecords are consistent from company to company, the usage, and therefore the uniqueness of values, of Field1 through Field20 is not. Company A may have 3 unique values for Field1 across 900k records (hence why a dropdown for Field1 makes sense for them), while Company B may have something unique in Field1 for every DataRecord.
Everything database-related is maintained via EF Migrations, and the site is set to auto-apply migrations on app startup (or on deploy, in the case of the Azure staging site). Anything recommended from a database perspective must be implementable programmatically through migrations, so that upgrading or instancing the site and database can happen without manual intervention by someone with db access. Also, any database changes must not interfere with the Code First migrations that are created when models change (i.e., I cannot rename a column because some rogue index added outside of annotations exists).
Similarly to the previous point, the dropdown configuration is controlled via the site, so anything that needs doing must be addable and removable on demand at runtime.
Relevant data changes that occur within usage of the site, but not necessarily by the current user:
FileStatus of a DataFile changes from 0 to 1 or 2
Which BusinessLines the current user can access changes
Additional BusinessLines are added
Relevant data changes that occur outside of the site (via importer app which is also part of the solution that the site is in and therefore can be modified if necessary):
New DataFiles and DataRecords are added
Additional BusinessLines are added (not a copy/paste error, they can be added through the importer as well)
Miscellaneous Info
The site is deployed to many locations, but in each deployment the site-to-database relationship is 1:1, so in-memory caching is not out of the question.
There is only one site admin who controls which fields are represented as dropdowns, and he can be educated about the ramifications of making frequent changes and the cache rebuilds each change may trigger, if necessary. He is also familiar with the data in each field at a company level and knows which fields are good candidates for dropdowns.
Just to give a little context on data quantity: in just over 2.5 months, the number of DataRecords for one company went from 558k to 924k. So obviously the solution should be able to cope with an ever-growing amount of data.
Offloading the loading of the values to an AJAX request, so as not to hold up the page load, is a good solution in general, but not one I can use here.
Two quick items jump out here:
1) Add the Field2 column that is being returned as an INCLUDE on a nonclustered index over DataRecords' File_Id column (a clustered index already carries every column at its leaf level, so INCLUDE applies to nonclustered indexes). That will keep the query from needing a bookmark lookup to find Field2 after the ON clause has done the main work of finding the IDs.
2) I am not sure why there is a double select happening. I don't think it has a big impact, but the query just re-selects what it already selected as distinct, without even changing the name...
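A sketch of the index from point 1, with an invented name (per the business rules above, it would need to be created through a migration rather than by hand; nvarchar(max) columns such as Field2 are allowed as INCLUDE columns, though not as key columns):
CREATE NONCLUSTERED INDEX [IX_DataRecords_File_Id_Field2]
    ON [dbo].[DataRecords] ([File_Id])
    INCLUDE ([Field2]);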

Generating script in SSMS 2016 RC for Azure database omits default values

I have migrated my local SQL Server database to Azure using the built-in migration tool in the SSMS 2016 release candidate. Apart from a number of failed conversions of stored procedures that use features disallowed in Azure, it looks OK.
I have now generated scripts of the schema from both the local and the Azure versions of the database, using the same scripting options, so that I can compare the two scripts and identify any differences or missing items.
My problem is that the script generated from Azure does not include the default value constraints on columns. Looking at the table definitions directly in SSMS shows that the default values have been set correctly.
Can anyone help me to get the SSMS script generator to add the default value constraints into the generated script?
This is an example script from the local database:
SET ANSI_NULLS ON
CREATE TABLE [xOrgBusinessType](
[OrgID] [int] NOT NULL,
[BusTypeID] [int] NOT NULL,
[CRD] [datetime] NOT NULL CONSTRAINT [DF_xOrgBusinessType_CRD] DEFAULT (getutcdate()),
[CRDByID] [int] NOT NULL CONSTRAINT [DF_xOrgBusinessType_CRDByID] DEFAULT ((0)),
CONSTRAINT [PK_xOrgBusinessType] PRIMARY KEY CLUSTERED
(
[OrgID] ASC,
[BusTypeID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
)
and the equivalent script from the Azure database:
SET ANSI_NULLS ON
CREATE TABLE [xOrgBusinessType](
[OrgID] [int] NOT NULL,
[BusTypeID] [int] NOT NULL,
[CRD] [datetime] NOT NULL,
[CRDByID] [int] NOT NULL,
CONSTRAINT [PK_xOrgBusinessType] PRIMARY KEY CLUSTERED
(
[OrgID] ASC,
[BusTypeID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
I have found the answer to my own question. I made a mistake when comparing the scripts from the two databases: the local database script was generated by SSMS 2014, whereas the Azure database script was generated by SSMS 2016, and the two versions treat CONSTRAINT .. DEFAULT .. differently.
In the case of 2014, the constraint is scripted with the column definition. In the case of 2016, all the constraints are scripted in a later block, as ALTER TABLE .. ADD CONSTRAINT .. commands.
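So the defaults from the local script above come out of the SSMS 2016 script looking roughly like this, far below the CREATE TABLE:
ALTER TABLE [xOrgBusinessType] ADD CONSTRAINT [DF_xOrgBusinessType_CRD] DEFAULT (getutcdate()) FOR [CRD]
GO
ALTER TABLE [xOrgBusinessType] ADD CONSTRAINT [DF_xOrgBusinessType_CRDByID] DEFAULT ((0)) FOR [CRDByID]
GO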
Although my question is answered, the SSMS 2016 approach seems somewhat bizarre. In the script above, the CREATE TABLE statement is around line 2500, while the ALTER TABLE statements are at line 13100. Further, I had set the option to "Include descriptive headers", but the ALTER TABLE statements are not preceded by an object header, even though that part of the script defines a constraint object.
I wonder why the ALTER TABLE statements are so far removed from the table definitions. There seems to be no reason not to include them right after the CREATE TABLE script.
