Thursday, March 29, 2012

Data throughput

I am trying to understand what things affect the
throughput of data when queries are executed against SQL
Server. I'm trying to determine if there are
settings, parameters or times that can improve the speed of
a recordset being returned to the application.
What I have noticed in our environment is that a
recordset of 1,000 rows is returned very quickly, whereas
a recordset of 1,000,000,000 rows can take over an hour.
I understand that the bandwidth will have a large effect
on the time, but there are times when the bandwidth is
wide open and it still takes a significant amount of
time. What things can I do/set to optimize this as much
as possible?
What can you do? Don't return 1 billion row resultsets, quite simple. No
human is able to digest that amount of information, so you should filter or
summarize it at the server using WHERE, SUM etc, and then return a more
limited number of rows to the client.
Jacco Schalkwijk
SQL Server MVP
"Jim Abel" <jim.abel@.lmco> wrote in message
news:5c7301c42d7f$a97fd480$a101280a@.phx.gbl...
> I am trying to understand what things affect the
> throughput of data when queries are executed to SQL
> Server. I'm trying to determine if there are
> settings,pramiters or times that can improve the speed of
> a recordset being returned to the application.
> What I have noticed in our environment is that a
> recordset of 1,000 rows is returned very quickly where as
> a recordset of a 1,000,000,000 rows can take over an hour.
> I understand that the bandwidth will have a large effect
> on the time but there are times when the bandwidth is
> wide open and it still takes a significant amount of
> time. What things can I do/set to optimixe this as much
> as possible.
|||The query is used to fill a CSV file that a third party
application uses to generate charts driven by the
ultimate Users. The reason I'm asking the question is to
see if there are options to speed the flow of data from
the Database to the CSV file to shorten the time it
currently takes. I can't modify the 3rd party's software
and I can't query less data, so ODBC seems like the
place to look. Your answer is logical for different
circumstances but doesn't get to the original question,
ARE there items that can be tweaked in ODBC that can
increase throughput?

>--Original Message--
>What can you do? Don't return 1 billion row resultsets, quite simple. No
>human is able to digest that amount of information, so you should filter or
>summarize it at the server using WHERE, SUM etc, and then return a more
>limited number of rows to the client.
>--
>Jacco Schalkwijk
>SQL Server MVP
>
>"Jim Abel" <jim.abel@.lmco> wrote in message
>news:5c7301c42d7f$a97fd480$a101280a@.phx.gbl...
>
|||What do you use to generate the csv file? bcp will be the quickest.
--
Jacco Schalkwijk
SQL Server MVP
<anonymous@.discussions.microsoft.com> wrote in message
news:70ca01c42fae$ac9de360$a101280a@.phx.gbl...
> The query is used to fill a CSV file that a third party
> application uses to generate charts driven by the
> ultimate Users. The reason I'm asking the question is to
> see if there are options to speed the flow of data from
> the Database to the CSV file to shorten the time it
> currently takes. I can't modify the 3rd parties software
> and I can't query less data so the ODBC seems like the
> place to look. Your answer is logical for different
> circumstances but doesn't get to the original question,
> ARE there items that can be tweaked in ODBC that can
> increase throughput?
>
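
For anyone reading the archive later: a minimal sketch of the bcp suggestion above, driven from T-SQL via xp_cmdshell (the server name, database, query and file path are made-up placeholders, and xp_cmdshell has to be available to your login):

-- Export a query result straight to a comma-separated file with bcp.
-- -c = character mode, -t, = comma field terminator, -T = Windows authentication.
EXEC master..xp_cmdshell
    'bcp "SELECT Col1, Col2, Col3 FROM MyDb.dbo.BigTable" queryout "C:\exports\bigtable.csv" -c -t, -S MYSERVER -T';

Pushing the rows out server-side like this avoids dragging a billion-row recordset through ODBC to the client just so the client can write it back out to a file.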

Data that is absent

I have a single table which we use to keep track of data from different
physical machines. The columns are: TheDate, TheName, & TheNotes. The Date
column is datetime and the rest are char. Every day each machine we collect
data on gets a row of information: The date of the collection, the machine
name, & any notes on the machine.
Since every day there is a new row for each machine, what I would like to do
is be able to tell in Query Analyzer whether there are any dates missing. Then we
could manually check the machine and see why it is not reporting.
I read an article that said to use the EXISTS command to select data based
on the presence or absence of values. I tried a couple of different code samples;
none showed signs of progress. If I were coding in VB I would use an array
and compare the distinct values. Is there a way in SQL to show values absent
from a known sequence?
Thanks
vm|||I assume you have a table with one row for each machine you want to manage.
You can do something like this.
tableA = Machine Names
tableB = Holds daily rows
SELECT a.Name FROM tableA AS a
WHERE NOT EXISTS (SELECT * FROM tableB AS b WHERE a.Name = b.Name
AND b.TheDate BETWEEN @FromDate AND @ToDate)
Andrew J. Kelly SQL MVP
"vm" <vm@.discussions.microsoft.com> wrote in message
news:513AB937-20C7-4893-8F8A-3E0363404703@.microsoft.com...
>I have a single table which we use to keep track of data from different
> physical machines. The columns are: TheDate, TheName, & TheNotes. The Date
> column is datetime and the rest are char. Every day each machine we
> collect
> data on gets a row of information: The date of the collection, the machine
> name, & any notes on the machine.
> Since everyday there is a new row for each machine, what I would like to
> do
> is in Query Analyzer be able to tell if there are any dates missing. Then
> we
> could manually check the machine and see why it is not reporting.
> I read an article that said to use the EXISTS command to select data based
> on the preence or absence of values. I tried a couple different code
> samples,
> none showed signs of progress. If I were coding in VB I would use an array
> and compare the distinct values. Is there a way in SQL to show values
> absent
> from a known sequence.
> Thanks
> vm|||Actually all of the data is in a single table. Each row in the table contains
the date, machine name, and any notes for the machine in different columns.
The database is simple and probably inefficient because I am fairly new to
SQL and the data I need to store and access is fairly straightforward. The
main reason I went with a SQL db over Access, Excel, or even text files is
because the number of machines I have to work with multiplied by the number
of days would have blown everything else away. The data is simple enough that
I could have gone with any of the other storage options except for the sheer
number of records.
Is there any way your code will work with a single table? What I have read
on EXISTS mentions multiple tables.
Thanks
vm
"Andrew J. Kelly" wrote:

> I assume you have a table with one row for each machine you want to manage.
> You can do something like this.
> tableA = Machine Names
> tableB = Holds daily rows
> SELECT a.Name FROM tableA AS a
> WHERE NOT EXISTS (SELECT * FROM tableB AS b WHERE a.Name = b.Name
> AND b.TheDate BETWEEN @.FromDate AND @.ToDate)
> --
> Andrew J. Kelly SQL MVP
>
> "vm" <vm@.discussions.microsoft.com> wrote in message
> news:513AB937-20C7-4893-8F8A-3E0363404703@.microsoft.com...
>
>|||"vm" <vm@.discussions.microsoft.com> wrote in message
news:513AB937-20C7-4893-8F8A-3E0363404703@.microsoft.com...
> I have a single table which we use to keep track of data from different
> physical machines. The columns are: TheDate, TheName, & TheNotes. The Date
> column is datetime and the rest are char. Every day each machine we collect
> data on gets a row of information: The date of the collection, the machine
> name, & any notes on the machine.
> Since everyday there is a new row for each machine, what I would like to do
> is in Query Analyzer be able to tell if there are any dates missing. Then we
> could manually check the machine and see why it is not reporting.
> I read an article that said to use the EXISTS command to select data based
> on the preence or absence of values. I tried a couple different code samples,
> none showed signs of progress. If I were coding in VB I would use an array
> and compare the distinct values. Is there a way in SQL to show values absent
> from a known sequence.
> Thanks
> vm
vm,
Basically, Andrew Kelly was right.
In order to have a known sequence of dates, you would build a
calendar table with your dates.
This way, you can run a NOT EXISTS or Frustrated Outer Join query
against the calendar table, and that will show you the missing
dates.
Sincerely,
Chris O.|||Excellent, that seems the simplest solution. I am away from work for the
weekend, but will try it on Monday.
Thanks to both!
vm
"Chris2" wrote:

> "vm" <vm@.discussions.microsoft.com> wrote in message
> news:513AB937-20C7-4893-8F8A-3E0363404703@.microsoft.com...
> vm,
> Basically, Andrew Kelly was right.
> In order to have a known sequence of dates, you would build a
> calendar table with your dates.
> This way, you can run a NOT EXISTS or Frustrated Outer Join query
> against the calendar table, and that will show you the missing
> dates.
>
> Sincerely,
> Chris O.
>
>
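
For anyone landing on this thread later, a minimal sketch of the calendar-table idea Chris describes, using the column names from the original post; the table names dbo.Calendar and dbo.MachineData are assumptions, since the thread never names the actual table:

-- A small calendar of the dates you expect to see (here: the last 30 days).
CREATE TABLE dbo.Calendar (TheDate datetime NOT NULL PRIMARY KEY);

DECLARE @d int;
SET @d = 0;
WHILE @d < 30
BEGIN
    -- Strip the time portion so the dates compare cleanly.
    INSERT dbo.Calendar (TheDate)
    VALUES (CONVERT(char(10), DATEADD(day, -@d, GETDATE()), 120));
    SET @d = @d + 1;
END;

-- Every machine/date combination that never got a row in the reporting table.
-- This assumes TheDate is stored without a time portion; if it is not, compare
-- on CONVERT(char(10), d.TheDate, 120) instead.
SELECT m.TheName, c.TheDate
FROM (SELECT DISTINCT TheName FROM dbo.MachineData) AS m
CROSS JOIN dbo.Calendar AS c
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.MachineData AS d
                  WHERE d.TheName = m.TheName
                    AND d.TheDate = c.TheDate);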

data that in one table column and not in the other table column

dear all

i have 2 tables, lets say table A and Table B

both tables have a column ID

i wonder how i can find records that appear in B.ID and do not appear in A.ID

what is the SQL command in this case?

Thanks a lot

Please post T-SQL question in the Transact-SQL forum at:

http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=85&SiteID=1

You can use the following query:

SELECT B.ID from B where B.ID NOT IN

(SELECT A.ID FROM A)

And this will return records from B which are not in A
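
As a side note for anyone reusing this: NOT IN returns no rows at all if A.ID ever contains a NULL, so the equivalent NOT EXISTS form is often the safer pattern:

SELECT B.ID
FROM B
WHERE NOT EXISTS (SELECT 1 FROM A WHERE A.ID = B.ID)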

Data transfer from one table to another with different schema on SQL Server 2000

Hi all
I am working on SQL Server 2000.
When a row is inserted/updated in one table, that data needs to be
inserted/updated into another database on the same machine. Both
tables have different schemas. What is the better solution? I know we can
do it with a trigger, but is there any better way to do it?
Thanks
ABCL
> do it with Trigger, But Is there any better way to do it?
How do you insert/update the data? By using an SP? If so, insert/update into
the other table within the same SP.
I hope you don't have permission problems when you update/insert the
data into a different schema/user-owned object.
"ABCL" <ohmp05@.hotmail.com> wrote in message
news:64f2096c-1e83-4e6a-8cec-2f0f818284bd@.n77g2000hse.googlegroups.com...
> Hi all
> I am working on SqlServer 2000,
> When row will be inserted/updated in one table that data neds to be
> inserted/updated into the another DataBase on the same machine. Both
> table has different schema, what is the Better solution? I know we can
> do it with Trigger, But Is there any better way to do it?
> Thanks
|||On Mar 16, 8:01 am, ABCL <ohm...@.hotmail.com> wrote:
> Hi all
> I am working on SqlServer 2000,
> When row will be inserted/updated in one table that data neds to be
> inserted/updated into the another DataBase on the same machine. Both
> table has different schema, what is the Better solution? I know we can
> do it with Trigger, But Is there any better way to do it?
> Thanks
Actually we want to synchronize the data from one database to another
and vice versa on insert/update and delete.
How can we perform data synchronization on both databases with
different schemas?
If I write triggers, I think there will be an endless loop if both tables
have triggers on insert/update and delete.
Do you have any solution for that? I am new to performing this kind of task.
Is it possible with DTS (a scheduled job)?
Is there any performance issue?
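
No answer to this follow-up made it into the archive, so here is only a hedged sketch of the trigger route, showing one common guard against the endless loop the poster is worried about. Every table and column name below is invented for illustration, and the column mapping stands in for the two different schemas:

-- Fires in the source database and pushes the change into the other database.
CREATE TRIGGER trg_Orders_Sync ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    -- If this trigger was itself fired by the matching trigger on the other
    -- side, stop here instead of bouncing the change back and forth.
    IF TRIGGER_NESTLEVEL() > 1
        RETURN;

    -- Update rows that already exist in the target (note the renamed columns).
    UPDATE t
    SET    t.CustomerName = i.CustName,
           t.OrderTotal   = i.Amount
    FROM   OtherDb.dbo.OrderCopy AS t
           JOIN inserted AS i ON i.OrderID = t.OrderID;

    -- Insert rows the target has never seen.
    INSERT OtherDb.dbo.OrderCopy (OrderID, CustomerName, OrderTotal)
    SELECT i.OrderID, i.CustName, i.Amount
    FROM   inserted AS i
    WHERE  NOT EXISTS (SELECT 1 FROM OtherDb.dbo.OrderCopy AS t
                       WHERE t.OrderID = i.OrderID);
END

One caveat: the nest-level check also skips the sync whenever the originating change itself comes from some other trigger, so for two-way synchronization merge replication or a scheduled DTS package is usually the more robust answer, as the follow-up itself suspects.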

data table fragmentation

we've got some large _data_ tables that could be fragmented (indexes
are already being defragged using dbcc indexdefrag). so,
1. how can i confirm this?
2. if they are, what's the best way to unfragment them?
i've been looking through old threads and see a lot of references to
dbcc shrinkfile, but that command doesn't seem to address the data
contiguity issue. the other option i've seen is to bcp the data out and
back in, but that seems a bit cumbersome.
thanks,
arthur
1)
DBCC SHOWCONTIG (For more details please refer to the BOL)
2)
DBCC INDEXDEFRAG
DBCC DBREINDEX
"arthur" <alangham@.gmail.com> wrote in message
news:1107784896.734177.96160@.l41g2000cwc.googlegroups.com...
> we've got some large _data_ tables that could be fragmented (indexes
> are already being defragged using dbcc indexdefrag). so,
> 1. how can i confirm this?
> 2. if they are, what's the best way to unfragment them?
> i've been looking through old threads and see a lot of references to
> dbcc shrinkfile, but that command doesn't seem to address the data
> contiguity issue. the other option i've seen is to bcp the data out and
> back in, but that seems a bit cumbersome.
> thanks,
> arthur
>
|||No don't use shrinkfile as that will fragment them even more. You defrag
indexes with DBCC DBREINDEX or DBCC INDEXDEFRAG. The table itself should
have a clustered index on it so if you rebuild the clustered index you
rebuild the table itself. Have a look here:
http://www.microsoft.com/technet/pro.../ss2kidbp.mspx
Andrew J. Kelly SQL MVP
"arthur" <alangham@.gmail.com> wrote in message
news:1107784896.734177.96160@.l41g2000cwc.googlegroups.com...
> we've got some large _data_ tables that could be fragmented (indexes
> are already being defragged using dbcc indexdefrag). so,
> 1. how can i confirm this?
> 2. if they are, what's the best way to unfragment them?
> i've been looking through old threads and see a lot of references to
> dbcc shrinkfile, but that command doesn't seem to address the data
> contiguity issue. the other option i've seen is to bcp the data out and
> back in, but that seems a bit cumbersome.
> thanks,
> arthur
>
|||use dbcc showcontig
check books on line for lots of examples and explanation of what the output
means.
fix via indexdefrag or dbReindex.
keep in mind that if a table does not have a clustered index, you will not
be able to defrag it.
cheers,
Greg Jackson
Portland, Oregon
|||as always, thanks!
arthur
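
For completeness, a minimal sketch of the commands discussed above; the database, table and index names are placeholders, and on big tables these are best run off-hours:

-- 1) Measure fragmentation (Scan Density and Logical Scan Fragmentation are
--    the figures worth watching in the output).
DBCC SHOWCONTIG ('dbo.BigTable') WITH ALL_INDEXES, TABLERESULTS;

-- 2a) Defragment in place; holds only short locks and can be stopped mid-way.
DBCC INDEXDEFRAG (MyDb, 'dbo.BigTable', PK_BigTable);

-- 2b) Or rebuild every index on the table (more thorough but more intrusive);
--     rebuilding the clustered index rebuilds the table itself.
DBCC DBREINDEX ('dbo.BigTable');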

data table foreign key problem

I am just learning how to program in SQL, so please be patient with me

ok, here it is, i am getting the following errors with my script when i try to execute, and although i realize that this is not the best style to write my script in, i am trying to learn why this is erroring. the errors are:

Msg 1769, Level 16, State 1, Line 9

Foreign key 'Employee2Job_Title' references invalid column 'Title' in referencing table 'Employee'.

Msg 1750, Level 16, State 0, Line 9

Could not create constraint. See previous errors.

now i realize that the second error is because of the first, and that if i can fix the first, the second will go away. Thanks for the help.

Here is my script

use inventory

go

CREATE TABLE Job_Title

(Job_Title_Title char(25) NOT NULL ,

Job_Title_EEO1_Classification char(25) ,

Job_Title_Job_Description Varchar(45) ,

Job_Title_Exempt_Status Varchar(15)

, PRIMARY KEY (Job_Title_Title)

);

CREATE TABLE Employee

(Employee_Emp_ID integer NOT NULL ,

Employee_last_name varchar(15) ,

Employee_first_name varchar(15) ,

Employee_address varchar(30) ,

Employee_city varchar(15) ,

Employee_state char(2) ,

Employee_Telephone_area_code char(3) ,

Employee_Telephone_number char(8) ,

Employee_EEO1_Classification char(25) ,

Employee_Hire_Date char(8) ,

Employee_Salary char(6) ,

Employee_Gender Varchar(1) ,

Employee_Age char(2) ,

Employee_Title char(25)

, PRIMARY KEY (Employee_Emp_ID)

, constraint Employee2Job_Title FOREIGN KEY (Title

) REFERENCES Job_Title

);

Insert INTO Employee (Emp_ID, Last_name, First_name, Address, City, State, Telephone_area_code, Telephone_number, EEO1_Classification, Hire_date, Salary, Gender, Age, Title) Values

(1, 'Edelman', 'Glenn', '175 Bishop Lane', 'La Jolla', 'CA', 619, '555-0199', 'Sales Workers', 10/7/2003, 21500.00, 'M', 64, 'Cashier')

Insert INTO Employee (Emp_ID, Last_name, First_name, Address, City, State, Telephone_area_code, Telephone_number, EEO1_Classification, Hire_date, Salary, Gender, Age, Title) Values

(2, 'McMullen', 'Eric', '762 Church Street', 'Lemon Grove', 'CA', 619, '555-0133', 'Sales Workers', 11/1/2002, 13500.00, 'M', 20, 'Bagger')

Insert INTO Employee (Emp_ID, Last_name, First_name, Address, City, State, Telephone_area_code, Telephone_number, EEO1_Classification, Hire_date, Salary, Gender, Age, Title) Values

(3, 'Slenj', 'Raj', '123 Torrey Drive', 'North Clairmont', 'CA', 619, '555-0123', 'Officials & Managers', 6/1/2000, 48000.00, 'M', 34, 'Assistant Manager')

Insert INTO Employee (Emp_ID, Last_name, First_name, Address, City, State, Telephone_area_code, Telephone_number, EEO1_Classification, Hire_date, Salary, Gender, Age, Title) Values

(4, 'Broun', 'Erin', '2045 Parkway Apt 2b', 'Encinitas', 'CA', 760, '555-0100', 'Sales Workers', 3/12/2003, 10530.00, 'F', 24, 'Bagger - 30 hours/wk')

Insert INTO Employee (Emp_ID, Last_name, First_name, Address, City, State, Telephone_area_code, Telephone_number, EEO1_Classification, Hire_date, Salary, Gender, Age, Title) Values

(5, 'Carpenter', 'Donald', '927 Second Street', 'Encinitas', 'CA', 619, '555-0154', 'Office/Clerical', 11/1/2003, 15000.00, 'M', 18, 'Stocker')

Insert INTO Employee (Emp_ID, Last_name, First_name, Address, City, State, Telephone_area_code, Telephone_number, EEO1_Classification, Hire_date, Salary, Gender, Age, Title) Values

(6, 'Esquivez', 'David', '10983 North Coast Highway Apt 902', 'Encinitas', 'CA', 760, '555-0108', 'Operatives (Semi skilled)', 7/25/2003, 18500.00, 'M',25, 'Asst. - Butchers & Seafood Specialists')

Insert INTO Employee (Emp_ID, Last_name, First_name, Address, City, State, Telephone_area_code, Telephone_number, EEO1_Classification, Hire_date, Salary, Gender, Age, Title) Values

(7, 'Sharp', 'Nancy', '10793 Montecino Road', 'Ramona', 'CA', 858, '555-0135', 'Sales Workers', 7/12/2003, 21000.00, 'F', 24, 'Cashier')

Insert into Job_Title (Title, EEO1_Classification, Job_Description, Exempt_Status) Values

('Accounting Clerk', 'Office/Clerical', 'Computes, Classifies, records, and verifies numerical data for use in maintaining accounting records.', 'Non-Exempt')

Insert into Job_Title (Title, EEO1_Classification, Job_Description, Exempt_Status) Values

('Assistant store manager', 'Officials & Mangers', 'Supervises and coordintes activities of workers in department of food store. Assist store manager in daily operations of store.', 'Exempt')

Insert into Job_Title (Title, EEO1_Classification, Job_Description, Exempt_Status) Values

('Bagger', 'Sales Worker', 'Places customer orders in bags. Performs carry out duties for customers.', 'Non-Exempt')

Insert into Job_Title (Title, EEO1_Classification, Job_Description, Exempt_Status) Values

('Cashier', 'Sales Worker', 'Operates Cash register to itemize and total customers purchases in grocercy store', 'Non-Exempt')

Insert into Job_Title (Title, EEO1_Classification, Job_Description, Exempt_Status) Values

('Computer Support Specialist', 'Technician', 'Installs, Modifies, and makes minor repairs to personal computer hardware and software systems and provides technical assistance and training to system users.', 'Non-Exempt')

Insert into Job_Title (Title, EEO1_Classification, Job_Description, Exempt_Status) Values

('Director of Finance & Accounting', 'Officials & Mangers', 'Plans and directs finance and accounting activites for Kudlser Fine Foods.', 'Non-Exempt')

Insert into Job_Title (Title, EEO1_Classification, Job_Description, Exempt_Status) Values

('Retail Asst. Bakery & Pastry', 'Craft Workers (Skilled)', 'Obtains or prepares Bakery and Pastry items requested by customers in retail food store.', 'Non-Exempt')

Insert into Job_Title (Title, EEO1_Classification, Job_Description, Exempt_Status) Values

('Retail Asst. Butchers and Seafood Specialists', 'Operatives (Semi skilled)', 'Obtains or prepares Meat and Seafood items requested by customers in retail food store.', 'Non-Exempt')

Insert into Job_Title (Title, EEO1_Classification, Job_Description, Exempt_Status) Values

('Stocker', 'Office/Clerical', 'Stores, prices, and restocks merchandise displays in store.', 'Non-Exempt')

In your foreign key that you are creating with:

constraint Employee2Job_Title FOREIGN KEY (Title) REFERENCES Job_Title

The column named Title needs to exist in the Employee table on which you are creating the constraint. Then for the column it points to with the REFERENCES part, it needs to be in the format ReferenceTable(ReferenceTableColumnName).

If I'm guessing right at your tables, I think what you are looking for is more along the lines of

constraint Employee2Job_Title FOREIGN KEY (Employee_Title) REFERENCES Job_Title(Job_Title_Title)

-Sue

|||ok, here is the code i ended up with that cured that problem

CREATE TABLE Job_Title
(Job_Title_Title char(25) NOT NULL ,
Job_Title_EEO1_Classification char(25) ,
Job_Title_Job_Description Varchar(45) ,
Job_Title_Exempt_Status Varchar(15)
, PRIMARY KEY (Job_Title_Title)
);

CREATE TABLE Employee
(Employee_Emp_ID integer NOT NULL ,
Employee_last_name varchar(15) ,
Employee_first_name varchar(15) ,
Employee_address varchar(30) ,
Employee_city varchar(15) ,
Employee_state char(2) ,
Employee_Telephone_area_code char(3) ,
Employee_Telephone_number char(8) ,
Employee_EEO1_Classification char(25) ,
Employee_Hire_Date char(8) ,
Employee_Salary char(6) ,
Employee_Gender Varchar(1) ,
Employee_Age char(2) ,
Employee_Title char(25)
, PRIMARY KEY (Employee_Emp_ID)
, constraint Employee2Job_Title FOREIGN KEY (Employee_Title
) REFERENCES Job_Title
);

but now i am on to new and even more frustrating errors, and i will do some work on them myself, on the morrow, then if i am still having problems, i will again avail myself of this resource.

thanks much sue, your assistance has been invaluable!
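
The follow-up errors never made it into the thread, but one likely next stumbling block is visible in the script above: the INSERT statements still use the old, unprefixed column names (Emp_ID, Title, and so on), which no longer match the CREATE TABLE definitions. A hedged sketch of one corrected insert, with a shortened description so the value fits the declared varchar(45):

INSERT INTO Job_Title
    (Job_Title_Title, Job_Title_EEO1_Classification,
     Job_Title_Job_Description, Job_Title_Exempt_Status)
VALUES
    ('Cashier', 'Sales Worker', 'Operates cash register in grocery store', 'Non-Exempt');

The Employee inserts need the same renaming, and values headed for char columns such as Employee_Hire_Date and Employee_Salary should be quoted strings that fit the declared lengths (an unquoted 10/7/2003 is integer division, not a date).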

Data tab works great but I can't get it to work in the Preview tab in Reporting Services

This code works great in the Data tab but when I go to preview it I get a LEN function error.
I am passing possibly BLANK values in all the strings, not NULL. Do you think it's a parameter expression issue? If so, how do I write an expression for each parameter that will reflect this logic? NOTE: As you can see, states and program are multi-value params, and I don't know ahead of time what values will be entered. So if someone could please help.
SELECT DISTINCT A.RECIPIENT_STATE, B.FISCAL_YEAR, B.FEDERAL_SHARE, B.NON_FEDERAL_SHARE
FROM Tab_Awards AS A JOIN TAB_Amendments AS B ON A.AWARD_NUMBER = B.AWARD_NUMBER JOIN Tab_Program AS C
ON A.FPO_ID = C.FPO_ID
WHERE (LEN(@keywords)<1 OR (FREETEXT(A.*,@keywords)))
AND (LEN(@states)<1 OR A.RECIPIENT_STATE IN (@states))
AND (LEN(@program)<1 OR C.PROGRAM IN (@program))
AND (LEN(@fromYear)<1 OR B.FISCAL_YEAR >= @fromYear)
AND (LEN(@toYear)<1 OR B.FISCAL_YEAR <= @toYear)
AND (LEN(@year)<1 OR LEN(@year)>=1)
AND (LEN(@organization)<1 OR LEN(@organization)>=1)
From http://www.developmentnow.com/g/115_0_0_0_0_0/sql-server-reporting-services.ht
Posted via DevelopmentNow.com Group
http://www.developmentnow.com|||What exactly is the error? The reason I am asking is that LEN must be evaluating
some numeric value, e.g. the year parameter may be numeric, and LEN will
take a string expression only. So just have a look at all the parameters again
and check their data types.
Amarnath
"Dre" wrote:
> This code works great in data tab but when i go to preview it i get a len function error.
> I am passing possible BLANK values in all the strings not NULL. Do you think its an parameter expression issue? If so, how do I write a expression for each parameter that will reflect this logic. NOTE: As you can see states and program are multi param, and I don't know ahead of time what values will be entered. So if some could please help.
> SELECT DISTINCT A.RECIPIENT_STATE, B.FISCAL_YEAR, B.FEDERAL_SHARE, B.NON_FEDERAL_SHARE
> FROM Tab_Awards AS A JOIN TAB_Amendments AS B ON A.AWARD_NUMBER = B.AWARD_NUMBER JOIN Tab_Program AS C
> ON A.FPO_ID = C.FPO_ID
> WHERE (LEN(@.keywords)<1 OR (FREETEXT(A.*,@.keywords)))
> AND (LEN(@.states)<1 OR A.RECIPIENT_STATE IN (@.states))
> AND (LEN(@.program) <1 OR C.PROGRAM IN (@.program))
> AND (LEN(@.fromYear)<1 OR B.FISCAL_YEAR >= @.fromYear)
> AND (LEN(@.toYear)<1 OR B.FISCAL_YEAR <= @.toYear)
> AND (LEN(@.year)<1 OR LEN(@.year)>=1)
> AND (LEN(@.organization)<1 OR LEN(@.organization)>=1)
> From http://www.developmentnow.com/g/115_0_0_0_0_0/sql-server-reporting-services.htm
> Posted via DevelopmentNow.com Groups
> http://www.developmentnow.com
>
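
To make the data-type point concrete, a small self-contained sketch against one of the tables from the post: if the year parameters reach the query as character data, the LEN() guards behave as intended, so either declare those report parameters as String or cast them before handing them to LEN() (the parameter values below are made up):

DECLARE @fromYear varchar(4), @toYear varchar(4);
SELECT @fromYear = '', @toYear = '2004';

SELECT B.FISCAL_YEAR, B.FEDERAL_SHARE
FROM   TAB_Amendments AS B
WHERE (LEN(@fromYear) < 1 OR B.FISCAL_YEAR >= @fromYear)   -- blank means no lower bound
  AND (LEN(@toYear)   < 1 OR B.FISCAL_YEAR <= @toYear);    -- blank means no upper bound

The multi-value @states and @program parameters are a separate concern: once the report expands them into a value list, expressions like LEN(@states) may no longer behave as they do in the Data tab, so that check might be better handled through the parameters' default values.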

data synchronization across multiple servers running replication

Hello,

We have an environment with 7 servers that are running replication with one another and I'm wondering if there are any tools or experiences that any of you might have that may assist in the auditing of these servers. The data should be in sync across the board for all tables, but sometimes problems can arise such as replication not being set up properly, stored procedures being out of sync, or data glitches etc.

In dealing with these issues we have an in-house written program which analyzes each table on each server, takes a snapshot and does a column-by-column compare. We also have another program that will sync the data up (basically a delete/insert statement on the publisher). This process can take up to 3 weeks for our quarterly update of every table. I'm wondering if anybody has used any tools such as SSIS or a third-party tool and has done or is doing something similar to what we are doing now. If so, are there any tips you wouldn't mind sharing on how our process might be sped up?

Thanks,

Phil

In SQL Server 2005 there is a tablediff tool which you can use to compare the tables. You can also use the validation options built into replication to verify whether the data are in sync.

Here are the validation options:

In transactional replication: sp_publication_validation

In merge replication: use the -validate parameter for merge agent

|||

Li,

Thanks for the reply. I believe we can use this. Do you know how well it does as far as speed is concerned? I think that's our main concern right now. Also, I'm reading a bit on the utility and I'm wondering how it takes a "snapshot" of a table on multiple servers and compares them. Also, you wouldn't happen to have any syntax examples, would you?

Thanks,

Phil

|||

Also, does anyone know if there is a way to get the utility to use Windows Authentication instead of coding the sa password in the utility statement?

Thanks,

Phil
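
Since the thread ends without the syntax Phil asked for, here is a hedged sketch of a tablediff run (server, database, table and file names are placeholders; the path shown is the usual default install location). Leaving out the -sourceuser/-sourcepassword and -destinationuser/-destinationpassword switches makes the utility connect with Windows Authentication, which also covers the sa-password question:

-- Compare dbo.Orders on two servers and script the fixes to a .sql file.
EXEC master..xp_cmdshell '"C:\Program Files\Microsoft SQL Server\90\COM\tablediff.exe" -sourceserver SERVER1 -sourcedatabase SalesDb -sourcetable Orders -destinationserver SERVER2 -destinationdatabase SalesDb -destinationtable Orders -f C:\diffs\Orders_fix.sql';

On the speed concern, the -q switch does a quick row-count and schema comparison instead of a row-by-row one, which is much faster when you only need to know whether two tables have drifted.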

data synchronization

I have one Informix database on an OpenVMS machine and another MS SQL database on a Windows machine. I cannot touch the Informix server at all.

How do I extract/update the data periodically from Informix to SQL? I was thinking of writing a service, but I don't know how to do the incremental update.

I was thinking of doing a pull subscription from MS SQL. Is it feasible? Is it a replication or integration service? I dunno. Please help.

Thank you.You don't need to write a service. There already is a service. Use DTS.

I do a nightly transfer of data from Informix to SQL Server for one of my steady clients.
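
On the incremental-update part, a hedged sketch of one common pattern to run inside a DTS step: keep a watermark of the last load and pull only newer rows. It assumes a linked server to the Informix box and a last-modified column on the source table, neither of which the post confirms:

-- The watermark table lives on the SQL Server side.
DECLARE @last datetime;
SELECT @last = LastLoadTime FROM dbo.LoadWatermark WHERE TableName = 'claims';

-- Pull rows changed since the last run. Note the filter is applied on the SQL
-- Server side; for a truly incremental pull over the wire you would build the
-- OPENQUERY text dynamically with the watermark embedded in it.
INSERT dbo.claims_stage (claim_id, claim_amount, last_modified)
SELECT src.claim_id, src.claim_amount, src.last_modified
FROM   OPENQUERY(INFORMIX_SRV, 'SELECT claim_id, claim_amount, last_modified FROM claims') AS src
WHERE  src.last_modified > @last;

UPDATE dbo.LoadWatermark SET LastLoadTime = GETDATE() WHERE TableName = 'claims';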

data synchronization

Hello! Does anybody know of a tool to synchronize data between two databases
(both databases have the same structure)? I cannot use replication because I do not
have a permanent connection between the servers. Currently somebody sends me a
backup periodically by email, and I restore it into a second local database,
work locally with the principal and the second database, and then send back
the synchronized backup.
Thank you in advance, and sorry for my English.
Guillermo
You might be able to use tools like RedGate software's data compare -
however they too will require a network connection.
You can replicate using merge replication or transactional replication to a
local database and then hack the msmerge_contents or msmerge_tombstone
tables for merge replication, or use sp_browsereplcmds for transactional
replication to get a list of the changes and then apply these on either
side.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Guillermo Villanueva" <guillermovil@.nospam.com> wrote in message
news:OTu6RLM4GHA.5000@.TK2MSFTNGP02.phx.gbl...
> Hello! anybody know some tool to synchronizate data between databases (two
> databases has the same structure) . I can not use replicate becouse I have
> not a permanet connection between servers. Now, somebody can send me a
> backup peridocally using email. And I can restore in a second database in
> local and work in local with principal and second database, then send back
> the backup sinchronized.
> Thank you in advance and sorry for my english.
> Guillermo
>
|||"Hilary Cotter" wrote:

> You might be able to use tools like RedGate software's data compare -
> however they too will require a network connection.
> You can replicate using merge replication or transactional replication to a
> local database and then hack the msmerge_contents or msmerge_tombstone
> tables for merge replication, or use sp_browsereplcmds for transactional
> replication to get a list of the changes and then apply these on either
> side.
> --
> Hilary Cotter
> Director of Text Mining and Database Strategy
> RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
> This posting is my own and doesn't necessarily represent RelevantNoise's
> positions, strategies or opinions.
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
>
> "Guillermo Villanueva" <guillermovil@.nospam.com> wrote in message
> news:OTu6RLM4GHA.5000@.TK2MSFTNGP02.phx.gbl...
>
>
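
For the transactional-replication route Hilary mentions, the starting point is simply the following, run in the distribution database on the distributor; its optional parameters let you narrow the output to a single article or sequence-number range:

USE distribution;
EXEC sp_browsereplcmds;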

data synchronization

Dear All,
I have 3 databases in SQL Server; one db has the master data and I want to
synchronize it to the other 2 databases. Is any sync software available for that, or
is there anything in SQL Server itself? The databases may be in the same place or remote.
Please help.
thanking you
Regards
K R LalYou could write a script to keep the databases updated and run the script at
specific intervals. You may also consider replication. Take a look at
Replication in Books Online.
"Lal" <kr.lal@.luxoroffice.com> wrote in message
news:O8kSCgaCFHA.904@.TK2MSFTNGP12.phx.gbl...
> Dear All,
> I have 3 databases in sql , one db have a master data and I want to
> synchronize to other 2 databases. is any sync. sw is available for that.
> or
> is any thing in SQL also. databases may be in same place or remortly also
> please help
> thanking you
> Regards
>
> K R Lal
>|||Lal
The question is how often do you want to get fresh data from the master?
Replication, especially transactional replication, could suit you.
Also, I'd look at the BACKUP/RESTORE commands together with BACKUP LOG
as well.
"Lal" <kr.lal@.luxoroffice.com> wrote in message
news:O8kSCgaCFHA.904@.TK2MSFTNGP12.phx.gbl...
> Dear All,
> I have 3 databases in sql , one db have a master data and I want to
> synchronize to other 2 databases. is any sync. sw is available for that.
or
> is any thing in SQL also. databases may be in same place or remortly also
> please help
> thanking you
> Regards
>
> K R Lal
>|||Hello Lal.
Replication would be the de facto response, although it may not suit your
environment or needs. You could also check out DB Ghost, which can synchronize
data or schema or both, and which can be scheduled to run when you like. I
created a SQL Server job to synchronize my database with the hosting company
so I could have a reporting and disaster recovery database in case anything
went wrong with my hosting company. You can download the example here:
http://www.innovartis.co.uk/downloa...Job.zip
regards,
Mark Baekdal
http://www.dbghost.com
http://www.innovartis.co.uk
+44 (0)208 241 1762
Database change management for SQL Server
"Lal" wrote:

> Dear All,
> I have 3 databases in sql , one db have a master data and I want to
> synchronize to other 2 databases. is any sync. sw is available for that. or
> is any thing in SQL also. databases may be in same place or remortly also
> please help
> thanking you
> Regards
>
> K R Lal
>
>
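
A minimal sketch of the BACKUP/RESTORE route Uri mentions (paths and database names are placeholders): take a full backup of the master copy, restore it on the other two servers without recovery, then ship and restore log backups on a schedule:

-- On the server holding the master data:
BACKUP DATABASE MasterDb TO DISK = 'D:\backups\MasterDb_full.bak';
BACKUP LOG MasterDb TO DISK = 'D:\backups\MasterDb_log_0800.trn';

-- On each of the two other servers (repeat the RESTORE LOG step for every
-- shipped log backup; STANDBY keeps the copy readable between restores):
RESTORE DATABASE MasterDb FROM DISK = 'D:\backups\MasterDb_full.bak' WITH NORECOVERY;
RESTORE LOG MasterDb FROM DISK = 'D:\backups\MasterDb_log_0800.trn' WITH STANDBY = 'D:\backups\MasterDb_undo.ldf';

This is essentially hand-rolled log shipping, so the copies stay read-only between restores; if the other two databases must also accept changes of their own, replication is the better fit, as the earlier replies say.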

Data Synchronisation Issue

I am not quite a newbie in the area of SQL server - more "knows just
enough to be dangerous"
- so please be gentle with me.
Cross posted to several groups - apologies if too far off topic
I have an application that does a lot of massaging of data (insurance
claims) from 3 different data sources to present one set of nice
homogeneous output tables. Uses about 200 various tables and about 300
queries to do the work ( sounds like a mess - but trust me - it is
quite disciplined)
We are using 4 * 1Gb mdb backends and the client agrees it is time to
move
the backend to SQL server. They have made it clear that they do not
wish to convert any of the queries to pass thru queries - ie - I am
just replacing my attachments to mdb tables with attachments to SQL
tables. They accept that it will probably run even slower due to the
extra SQL overheads.
I have dealt with most issues in the conversion but the 3 show
stoppers are:
1) Half way through the process I get a "record is deleted" message
when one of the queries attempts to run. I am guessing that there is a
synchronisation problem between any earlier query that empties a table
and an "append" query that refills it, and maybe a subsequent select
query that uses the refilled table.
Can anyone give me some code snippets in access to force a query to
flush all its results to SQL before I embark on the next query.
2) Initially I used the data transformation services to load all the
access backend tables across to SQL server.
I then retweaked my homebrew attachment routines to handle attaching
to an SQL table - everything worked fine.
Of course the tables were not updateable due to a lack of primary keys
in the SQL tables.
No problem - I worked my way through the SQL tables building
constraints and / or primary keys.
Then I found that my attachment routines would fail for some of the
tables - message being (paraphrasing) - "I can't find that table or
the table name is too long"
If I went back in to SQL server and shortened the table names down to
about 20 characters - then the problem went away.
I even adjusted my Access attachment routines so that I could still
keep the desired attached table names.
The problem is more for the client - when they go to point Cognos at
the SQL tables - they will need to do some reworking (or maybe Cognos
has an alias facility)
Can anyone shed any light on this situation.
3) When I set up DSN's on my (Win 98) machine - they appear to store
the user password quite happily.
On the client's (XP) machine - when my attachment routine runs - it
appears that the DSN is not holding the password anymore - and we get
prompted for the password, for every table that is being attached.
Can anyone explain why ?
Many thanks in advance
Tony
See inline
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
<ace join_to ware@.iinet.net.au (Tony Epton)> wrote in message
news:41abdc2f.3023390@.news.m.iinet.net.au...
> I am not quite a newbie in the area of SQL server - more "knows just
> enough to be dangerous"
> - so please be gentle with me.
> Cross posted to several groups - apologies if too far off topic
> I have an application that does a lot of massaging of data (insurance
> claims) from 3 different data sources to present one set of nice
> homogeneous output tables. Uses about 200 various tables and about 300
> queries to do the work ( sounds like a mess - but trust me - it is
> quite disciplined)
> We are using 4 * 1Gb mdb backends and the client agrees it is time to
> move
> the backend to SQL server. They have made it clear that they do not
> wish to convert any of the queries to pass thru queries - ie - I am
> just replacing my attachments to mdb tables with attachments to SQL
> tables. They accept that it will probably run even slower due to the
> extra SQL overheads.
> I have dealt with most issues in the conversion but the 3 show
> stoppers are:
> 1) Half way through the process I get a "record is deleted" message
> when one of the queries attempts to run. I am guessing that there is a
> synchronisation problem between any earlier query that empties a table
> and an "append" query that refills it, and maybe a subsequent select
> query that uses the refilled table.
> Can anyone give me some code snippets in access to force a query to
> flush all its results to SQL before I embark on the next query.
>
If you are running a single process, then each query is complete before the
next one begins.

> 2) Initially I used the data transformation services to load all the
> access backend tables across to SQL server.
> I then retweaked my homebrew attachment routines to handle attaching
> to an SQL table - everything worked fine.
> Of course the tables were not updateable due to a lack of primary keys
> in the SQL tables.
> No problem - I worked my way through the SQL tables building
> constraints and / or primary keys.
> Then I found that my attachment routines would fail for some of the
> tables - message being (paraphrasing) - "I can't find that table or
> the table name is too long"
> If I went back in to SQL server and shortened the table names down to
> about 20 characters - then the problem went away.
> I even adjusted my Access attachment routines so that I could still
> keep the desired attached table names.
> The problem is more for the client - when they go to point Cognos at
> the SQL tables - they will need to do some reworking (or maybe Cognos
> has an alias facility)
> Can anyone shed any light on this situation.
SQL table names can be up to 128 characters long... I suspect you are seeing
some Access restriction, not a sql restriction.

> 3) When I set up DSN's on my (Win 98) machine - they appear to store
> the user password quite happily.
> On the client's (XP) machine - when my attachment routine runs - it
> appears that the DSN is not holding the password anymore - and we get
> prompted for the password, for every table that is being attached.
> Can anyone explain why ?
XP DSNs hold login/password if you choose NOT to use trusted
authentication... The create DSN walks you through this... Also, make sure
you are using a system DSN, so it will be valid for any user on the
machine...
> Many thanks in advance
> Tony

Data Synchronisation Issue

I am not quite a newbie in the area of SQL server - more "knows just
enough to be dangerous"
- so please be gentle with me.
Cross posted to several groups - apologies if too far off topic
I have an application that does a lot of massaging of data (insurance
claims) from 3 different data sources to present one set of nice
homogeneous output tables. Uses about 200 various tables and about 300
queries to do the work ( sounds like a mess - but trust me - it is
quite disciplined)
We are using 4 * 1Gb mdb backends and the client agrees it is time to
move
the backend to SQL server. They have made it clear that they do not
wish to convert any of the queries to pass thru queries - ie - I am
just replacing my attachments to mdb tables with attachments to SQL
tables. They accept that it will probably run even slower due to the
extra SQL overheads.
I have dealt with most issues in the conversion but the 3 show
stoppers are:
1) Half way through the process I get a "record is deleted" message
when one of the queries attempts to run. I am guessing that there is a
synchronisation problem between any earlier query that empties a table
and an "append" query that refills it, and maybe a subsequent select
query that uses the refilled table.
Can anyone give me some code snippets in access to force a query to
flush all its results to SQL before I embark on the next query.
2) Initially I used the data transformation services to load all the
access backend tables across to SQL server.
I then retweaked my homebrew attachment routines to handle attaching
to an SQL table - everything worked fine.
Of course the tables were not updateable due to a lack of primary keys
in the SQL tables.
No problem - I worked my way through the SQL tables building
constraints and / or primary keys.
Then I found that my attachment routines would fail for some of the
tables - message being (paraphrasing) - "I can't find that table or
the table name is too long"
If I went back in to SQL server and shortened the table names down to
about 20 characters - then the problem went away.
I even adjusted my Access attachment routines so that I could still
keep the desired attached table names.
The problem is more for the client - when they go to point Cognos at
the SQL tables - they will need to do some reworking (or maybe Cognos
has an alias facility)
Can anyone shed any light on this situation.
3) When I set up DSN's on my (Win 98) machine - they appear to store
the user password quite happily.
On the client's (XP) machine - when my attachment routine runs - it
appears that the DSN is not holding the password anymore - and we get
prompted for the password, for every table that is being attached.
Can anyone explain why ?
Many thanks in advance
TonySee inline
--
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and it's
community of SQL Server professionals.
www.sqlpass.org
<ace join_to ware@.iinet.net.au (Tony Epton)> wrote in message
news:41abdc2f.3023390@.news.m.iinet.net.au...
> I am not quite a newbie in the area of SQL server - more "knows just
> enough to be dangerous"
> - so please be gentle with me.
> Cross posted to several groups - apologies if too far off topic
> I have an application that does a lot of massaging of data (insurance
> claims) from 3 different data sources to present one set of nice
> homogeneous output tables. Uses about 200 various tables and about 300
> queries to do the work ( sounds like a mess - but trust me - it is
> quite disciplined)
> We are using 4 * 1Gb mdb backends and the client agrees it is time to
> move
> the backend to SQL server. They have made it clear that they do not
> wish to convert any of the queries to pass thru queries - ie - I am
> just replacing my attachments to mdb tables with attachments to SQL
> tables. They accept that it will probably run even slower due to the
> extra SQL overheads.
> I have dealt with most issues in the conversion but the 3 show
> stoppers are:
> 1) Half way through the process I get a "record is deleted" message
> when one of the queries attempts to run. I am guessing that there is a
> synchronisation problem between any earlier query that empties a table
> and an "append" query that refills it, and maybe a subsequent select
> query that uses the refilled table.
> Can anyone give me some code snippets in access to force a query to
> flush all its results to SQL before I embark on the next query.
>
If you are running a single process, then each query is complete before the
next one begins.
> 2) Initially I used the data transformation services to load all the
> access backend tables across to SQL server.
> I then retweaked my homebrew attachment routines to handle attaching
> to an SQL table - everything worked fine.
> Of course the tables were not updateable due to a lack of primary keys
> in the SQL tables.
> No problem - I worked my way through the SQL tables building
> constraints and / or primary keys.
> Then I found that my attachment routines would fail for some of the
> tables - message being (paraphrasing) - "I can't find that table or
> the table name is too long"
> If I went back in to SQL server and shortened the table names down to
> about 20 characters - then the problem went away.
> I even adjusted my Access attachment routines so that I could still
> keep the desired attached table names.
> The problem is more for the client - when they go to point Cognos at
> the SQL tables - they will need to do some reworking (or maybe Cognos
> has an alias facility)
> Can anyone shed any light on this situation.
SQL table names can be up to 128 characters long... I suspect you are seeing
some Access restriction, not a sql restriction.
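A quick server-side check (the long table name below is only an illustration) shows SQL Server itself is happy well past 20 characters, which points at the Access/ODBC linking layer rather than the server:
CREATE TABLE dbo.A_Table_Name_That_Is_Considerably_Longer_Than_Twenty_Characters
(
ID int NOT NULL PRIMARY KEY
)
GO
-- Returns a non-NULL object id, so a name of this length is perfectly legal on the server
SELECT OBJECT_ID('dbo.A_Table_Name_That_Is_Considerably_Longer_Than_Twenty_Characters')
GO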
> 3) When I set up DSN's on my (Win 98) machine - they appear to store
> the user password quite happily.
> On the client's (XP) machine - when my attachment routine runs - it
> appears that the DSN is not holding the password anymore - and we get
> prompted for the password, for every table that is being attached.
> Can anyone explain why ?
XP DSNs hold login/password if you choose NOT to use trusted
authentication... The create DSN walks you through this... Also, make sure
you are using a system DSN, so it will be valid for any user on the
machine...
> Many thanks in advance
> Tony


Data synchronisation across the Firewall

Narendra,
These articles should help:
http://support.microsoft.com/?id=321822
http://support.microsoft.com/default.aspx?scid=kb;en-
us;287932&Product=sql
Rgds,
Paul Ibison
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
Hello Paul,
thanks for the reply. But in our case, opening any special ports like
1433 for SQL Server is out of the question. I guess there must be another way
out.
Nor can we afford to have VPN connectivity.
regards
Narendra
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:35e701c4a47c$bc6a7590$a401280a@.phx.gbl...
> Narendra,
> These articles should help:
> http://support.microsoft.com/?id=321822
> http://support.microsoft.com/default.aspx?scid=kb;en->
us;287932&Product=sql
> Rgds,
> Paul Ibison
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
|||Narendra,
you'll need the SQL Server TCP/IP port to be open.
In SQL Server 2005 you can do merge replication over
HTTPS and it is quite simple to set up, but there is no
such thing in SQL Server 2000 (unless you are using CE),
and no file-based solution.
Rgds,
Paul Ibison
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)

data synchronisation

Hi,

I'm working with two databases on two physically separated servers: #1
is on a local machine and I can only connect to #2 via an ADSL
connection.
I need to synchronise the data. This works fine via the wizard from
the Enterprise Manager. But I want to get this done via a script from
the Query Analyzer that I can run whenever I want to. Something like:

insert into srv1.database.dbo.table1
select * from srv2.database.dbo.table1

When I do this I get an error about "linked servers". That surprises
me, because I have no linked servers configured and yet the wizard
works fine?

Can anyone tell me how to accomplish this?

tav
Jan Willems
jhmwillems@.hotmail.com (Jan) wrote in message news:<240176c1.0404060206.4bfaaca3@.posting.google.com>...
> Hi,
> I'm working with two databases on to physicaly separated servers: #1
> is on a local machine en I can only connect to #2 via an ADSL
> connection.
> I need to syncrhonise the data. This works fine via the wizard from
> the Enterprise Manager. But I want to get this done via a script form
> the Query Analyzer that I can run whenever I want to. Something like:
> insert into srv1.database.dbo.table1
> select * from srv2.database.dbo.table1
> When I do this I get an error about "linked servers". That surprises
> me, because I have no linked servers configured and yet the wizard
> works fine?
> Can anyone tell me how to accomplish this?
> tav
> Jan Willems

The Import/Export Data wizard uses a separate connection to each
server, so linked servers are not involved. The INSERT statement above
does require the linked servers to exist, however.

So you could just create the linked servers you need, or if that's not
possible for some reason, then you can save a DTS package from the
wizard, and execute it in whatever way is most convenient:

http://www.sqldts.com/default.aspx?104

Simon
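
For reference, a minimal sketch of what creating the linked server could look like; the server name, login and password below are placeholders rather than values from the thread:

-- Register the remote server and map a login for it
EXEC sp_addlinkedserver
@server = N'SRV2',
@srvproduct = N'SQL Server'
EXEC sp_addlinkedsrvlogin
@rmtsrvname = N'SRV2',
@useself = 'false',
@locallogin = NULL,
@rmtuser = N'remote_login',
@rmtpassword = N'remote_password'
GO
-- After that, the four-part-name INSERT from the original post should work
INSERT INTO [database].dbo.table1
SELECT * FROM SRV2.[database].dbo.table1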

Data SWITCH partition fails with primary key constraint Error

Hi champs,

I am trying to use SWITCH partitions from one fact table out to another identical table. On some tables this does not work as I get an ERROR concerning primary key constraints; is there a way around this without deleting the primary key constraint?

ERROR:

"SWITCH PARTITION 1 TO my_switch_out_table PARTITION 1 " failed with the following error: "ALTER TABLE SWITCH statement failed. SWITCH is not allowed because source table 'my_fact_table' contains primary key for constraint "

/Many thanks

Please post some sample DDL that demonstrates the problem. It will be easier to suggest the solution. You should also take a look at the BOL topic below:

http://msdn2.microsoft.com/en-gb/library/ms191160.aspx

It lists the table, index and constraint requirements for the switch to work.

|||

I have one table that has two columns as its PK, and this PK is referenced by a constraint from one other table, plus other constraints to 5 other tables.

I've constructed the "OLD_DATA" table as an exact duplicate, including indexes, of the source table.

However I cannot create the exact same constraints on the destination table, as these already exist in the database.

When I run the following SWITCH, the statement fails with the error shown below it:

ALTER TABLE dbo.source_table_fact
SWITCH PARTITION 1
TO dbo.OLD_DATA_source_table_fact
PARTITION 1
go

[Execute SQL Task] Error: Executing the query "ALTER TABLE dbo.Ordination_fact SWITCH PARTITION 1 TO dbo.OLD_DATA_MYtable_fact PARTITION 1 " failed with the following error: "ALTER TABLE SWITCH statement failed. SWITCH is not allowed because source table 'dbo.MYtable_fact' contains primary key for constraint 'FK_fact_Mytable_fact'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

This works fine on most of my fact tables but some SWITCHES will not work.

/Many thanks

|||

Here is the script
/* script start */
USE master
go
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'testdb')
BEGIN
ALTER DATABASE [testdb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE [testdb]
END
go
CREATE DATABASE [testdb]
GO
USE [testdb]
GO

CREATE PARTITION FUNCTION [RangeMonth] (datetime)
AS RANGE RIGHT FOR VALUES (
N'2006-12-01 00:00:00',
N'2007-01-01 00:00:00',
N'2007-02-01 00:00:00',
N'2007-03-01 00:00:00',
N'2007-04-01 00:00:00',
N'2007-05-01 00:00:00'
);
GO

CREATE PARTITION SCHEME [RangeM]
AS PARTITION [RangeMonth] all
TO ([Primary]);

CREATE TABLE dbo.test1
(
A uniqueidentifier NOT NULL,
B uniqueidentifier NULL,
C uniqueidentifier NULL,
ST Datetime NOT NULL,

CONSTRAINT test1_pk
PRIMARY KEY NONCLUSTERED (A ,ST) ON [RangeM] (ST),
) on [RangeM] (ST)
;

CREATE TABLE dbo.Lefttest1
(
A uniqueidentifier NOT NULL,
B uniqueidentifier NULL,
C uniqueidentifier NULL,
ST Datetime NOT NULL,

CONSTRAINT Lefttest1_pk
PRIMARY KEY NONCLUSTERED (A,ST) ON [Primary]
) ON [Primary]
;

CREATE TABLE dbo.test2
(
D uniqueidentifier NOT NULL,
E uniqueidentifier NOT NULL,
F uniqueidentifier NOT NULL,
PartKey Datetime NOT NULL

CONSTRAINT test2_pk
PRIMARY KEY NONCLUSTERED (D,PartKey) ON [RangeM] ([PartKey]),
) ON [RangeM] ([PartKey])
go

CREATE TABLE dbo.Lefttest2
(
D uniqueidentifier NOT NULL,
E uniqueidentifier NOT NULL,
F uniqueidentifier NOT NULL,
PartKey Datetime NOT NULL

CONSTRAINT Lefttest2_pk
PRIMARY KEY NONCLUSTERED (D,PartKey) ON [Primary],
) ON [Primary]

;

CREATE TABLE dbo.test3 (
G uniqueidentifier NOT NULL,
D uniqueidentifier NOT NULL,
PartKey Datetime NOT NULL,
AnotherTime Datetime NOT NULL,

constraint test3_pk
primary key nonclustered (G),

constraint test3_D_PartKey_ref
foreign key (D,PartKey)
references test2(D,PartKey)
on delete cascade
)
;
GO
CREATE TABLE dbo.Lefttest3 (
G uniqueidentifier NOT NULL,
D uniqueidentifier NOT NULL,
PartKey Datetime NOT NULL,
AnotherTime Datetime NOT NULL,

constraint Lefttest3_pk
primary key nonclustered (G) ON [Primary],

constraint Lefttest3_D_PartKey_ref
foreign key (D,PartKey)
references Lefttest2(D,PartKey)
on delete cascade
) ON [Primary]
;
GO

-- Try to switch out the first partition
-- on test1

ALTER TABLE test1
SWITCH PARTITION 1
TO Lefttest1;

-- Try to switch out the first partition
-- on test2
-- Fails with
-- Msg 4967, Level 16, State 1, Line 1
-- ALTER TABLE SWITCH statement failed.
-- SWITCH is not allowed because source table 'testdb.dbo.test2'
-- contains primary key for constraint 'test3_D_PartKey_ref'.

ALTER TABLE test2
SWITCH PARTITION 1
TO Lefttest2;
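
One common way around error 4967, sketched here as an assumption rather than something stated in the thread, is to drop the foreign key that references the source table's primary key, perform the switch, and recreate the constraint afterwards (the re-add will fail, by design, if rows still reference the data being switched out):

ALTER TABLE dbo.test3 DROP CONSTRAINT test3_D_PartKey_ref
GO
ALTER TABLE dbo.test2
SWITCH PARTITION 1
TO dbo.Lefttest2
GO
-- WITH CHECK revalidates the rows in test3 against what remains in test2
ALTER TABLE dbo.test3 WITH CHECK
ADD CONSTRAINT test3_D_PartKey_ref
FOREIGN KEY (D, PartKey) REFERENCES dbo.test2 (D, PartKey)
ON DELETE CASCADE
GO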

Data Summaries(Histograms) in SQL 2005

Hello everyone,
My question is regarding some internals of SQL Server 2005. Does anyone know what kind of histograms are used for estimating query result size? I understand that this is not really a T-SQL question, but I didn't know where else to put it :)

Any help is much appreciated.

Thanks,
Ando

Hi Ando,

Lookup BOL for DBCC SHOW_STATISTICS, this should give you the information you need (if I've understood your question).
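
For example (the table and index names below are only placeholders):

DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_OrderDate')
-- The last result set is the histogram itself (at most 200 steps), with columns
-- RANGE_HI_KEY, RANGE_ROWS, EQ_ROWS, DISTINCT_RANGE_ROWS and AVG_RANGE_ROWS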

If you need more details please ask.

Jamie

|||

Hello Jamie,
Thanks for your reply.
DBCC SHOW_STATISTICS shows some information about statistics. My initial question was about the underlying data structures (the type of histogram). There are many possible techniques, including some novel ones proposed by Microsoft Research. You can search "Query Estimation" on the ACM portal for some articles. I was wondering which one they are using in SQL 2005 :)
Thanks again,
Ando

|||

ah, okay then...can't help with that kind of detail :S

Although I'd also be interested to find out...I'll keep watching this thread :D

Jamie

Data SubSet

Is it possible to perform an SQL query on a Reporting Services DataSet... I
would like to write a statement like this:
SELECT COUNT(DISTINCT ServiceID) FROM RS_DataSet WHERE ServiceStatus = 'ACTIVE'
I know it would be easy to create a new DataSet with this query... however
RS is getting this table from an application... not directly from a
database...
Couldn't you just filter the dataset that you already have?
"MER78" wrote:
> Is it possible to perform an SQL query on a ReportingServices DataSet... I
> would like to write a statement like this:
> SELECT COUNT(DISTINCT ServiceID) FROM RS_DataSet WHERE ServiceStatus => "ACTIVE"
> I know it would be easy to create a new DataSet with this query... however
> RS it getting this table from an application... not directly from a
> database...|||Well that is what I was talking about... but I can't figure out how I would
do that without making invisible tables that filter and count for each
condition... the table I have displaying my DataSet is supposed to show all
the data... but counting say ACTIVE records... would be a subset of that
DataSet.
"B. Mark McKinney" wrote:
> Couldn't you just filter the dataset that you already have?|||I figured out a solution for my problem... with a little help from Google...
for anyone that might be interested...
=CountDistinct(IIF(Fields!ServiceStatus.Value = 1, Fields!ServiceID.Value,
NOTHING))
"MER78" wrote:
> Is it possible to perform an SQL query on a ReportingServices DataSet... I
> would like to write a statement like this:
> SELECT COUNT(DISTINCT ServiceID) FROM RS_DataSet WHERE ServiceStatus => "ACTIVE"
> I know it would be easy to create a new DataSet with this query... however
> RS it getting this table from an application... not directly from a
> database...

Data style?

Hi, I would like to change the data output style
generated from the SQL query. So, how can I go about
doing it? Example : display 59766 as 59,766 and also
datetime as date.
Please help...thanks a lot!
On Tue, 1 Feb 2005 20:39:35 -0800, Wendy W wrote:

>Hi, I would like to change the data output style
>generated from the SQL query. So, how can I go about
>doing it? Example : display 59766 as 59,766 and also
>datetime as date.
>Please help...thanks a lot!
>
Hi Wendy,
The formatting of the output of a query is entirely up to the client. If
you access the same data using Query Analyser, Enterprise Manager, Access
and some other tools, you'll already see a variety in formats used, as
each of these tools has its own defaults.
Best, Hugo
(Remove _NO_ and _SPAM_ to get my e-mail address)
|||
>Hi Wendy,
>The formatting of the output of a query is entirely up to the client. If
>you access the same data using Query Analyser, Enterprise Manager, Access
>and some other tools, you'll already see a variety in formats used, as
>each of these tools has its own defaults.
>Best, Hugo
>--
Thanks, Hugo but my situation is slightly different here.
I'm supposed to display records in a data grid (ASP.NET)
using the SQL statement. I can't format the data grid
column to my desired format, the grid is dynamic.
Therefore I need to format the fields in the SQL statement.
How can this be solved?
|||Wendy,
You will have to provide the data as character-based data,
preformatted as needed. You could try various solutions such
as these, shown with an integer variable @int and the getdate() function
instead of an entire table:
declare @int integer
set @int = 32476
select parsename(convert(varchar(20),cast(@int as money), 1),2)
select convert(char(10),getdate(),101)
select convert(char(10),getdate(),121)
Steve Kass
Drew University
Wendy W wrote:

>Thanks, Hugo but my situation is slightly different here.
>I'm supposed to display records in a data grip (ASP.NET)
>using the SQL statement. I can't format the data grid
>column to my desired format, the grid is dynamic.
>Therefore I need to format the fields in SQL statement.
>How can this be solved?
>

Data Structures

(Wow; surfing this site has really illuminated what a lowly hack-programmer I am to this field of SQL and relational processing :S )

I am creating a temp table, doing a Bill of Material explosion for a single Order Line Item.

(note: SQL Server 2000)

I used blindman's "accumulator method (click here) (http://www.dbforums.com/showpost.php?p=6239109&postcount=6)" to generate the entire potential BOM tree.

So; step 1 works wonderfully! :beer: Cheers blindman!

Now; I want to remove unwanted nodes (and all their children) from the temp-table. I have a bunch of functions (0 to 1 for each node) that return a Yes or No. If "No", then immediately flag that branch for elimination and don't revisit anything on it (and therein lies my problem). This will leave me with a temp table of only valid nodes.

Since this is recursive, and since it will involve Dynamic SQL (the function name varies), all I can come up with is using a Cursor in a WHILE loop. Even at that; since a CURSOR is point-in-time (ie: values don't change once selected), I'll have to re-check the current temp table values (or create a 2nd temp table of only deleted nodes and repeat a SELECT with NOT EXISTS in it, hmmm).

Since blindman's method generates a table ordered by level, the sequence of processing is pre-determined - unless I can re-order it into a more traditional hierarchy (1 entire branch at a time) and number the levels, in which case the cursor could just skip to the next branch of equal or higher level.

Note: The thought does occur to me I could have an intermediary function (static name) that in turn does the Dynamic SQL. These functions contain the Business Logic that looks at a myriad of column values and relationships in the database and there's no one-size-fits-all decision tree so Dynamic SQL is necessary.

The max cursor size will be maybe 300, and on average 100. Number of levels will normally be 3 or 4, but conceivably could be up to 10. Given the average 100 potential components/sub-assemblies, the final assembly will be about 30. As a periodic background process; it will do 3,000 Order Line Items a day, so I'm figuring 1 second response time per build is adequate (ie: the user's not waiting on it so it doesn't have to be blinding fast) - however why waste?

Anyhow; I thought this might be a fun problem for some Data Structure genius who wants to give a lesson in Relational Programming.

Thanks for looking.

Here's what I have so far:

CREATE PROCEDURE dbo.sp_ExplodeTest1 (@recID int = 1) AS
/* tbTestH is a table containing an assembly hierarchy.
Assemblies with no parent are Builds.
Assemblies with no children are Components.
Its columns:
MyID int,
ParentID int,
(other descriptive columns)
*/

declare @t table (
TempNodeID int identity(1,1)
,MyID int
)

-- Seed the tree with the Build's ID.
insert into @t (MyID) values (@recID)

/* This populates the temp table with the entire
Assembly for the given build.
It is Ordered By the level in the assembly.
Number of assembly levels is infinite.

For example:
Level 1 = the Build. It comes first
Level 2 = all parents are level 1. Is next (no particular seq)
Level 3 = all parents are level 2. Is next (no particular seq)
etc.

*/
while @@Rowcount > 0
insert into @t (MyID)
select tbTestH.MyID
from tbTestH
inner join @t rList on tbTestH.ParentID = rList.MyID
where not exists (
select *
from @t CurrentList
where CurrentList.MyID = tbTestH.MyID)

-- now to display the results so far.
select t.*, tbTestH.*
from tbTestH inner join @t t on tbTestH.MyID = t.MyID
order by t.TempNodeID

GO
Ok, well you base code looks fine. But I don't understand what you are trying to do next.

Do you want to use one or more functions to eliminate a branch and all of its children from the dataset you have created? Why not just embed the function in the code you use to populate the table, filtering out all and excluding all those that fail your function tests?
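
Something along these lines, say, assuming for illustration a single static test function named dbo.fn_KeepNode (the varying per-node function name is the complication discussed further down):

insert into @t (MyID)
select h.MyID
from tbTestH h
inner join @t rList on h.ParentID = rList.MyID
where not exists (
select * from @t CurrentList
where CurrentList.MyID = h.MyID)
and dbo.fn_KeepNode(h.MyID) = 1 -- hypothetical Y/N test; a rejected branch is never seeded, so its children never join in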

And thanks for the beer.|||Ok, well you base code looks fine. But I don't understand what you are trying to do next.

Do you want to use one or more functions to eliminate a branch and all of its children from the dataset you have created? Why not just embed the function in the code you use to populate the table, filtering out all and excluding all those that fail your function tests?

And thanks for the beer.
You're welcome and well deserved.

I'm not sure about the ultimate possibilities, but if I'm forced with the cursor approach then you are right, I could just skip them. Deleting from the list (or flagging for deletion, since that would have less overhead) only has value if there will be multiple passes, particularly using cursors. In other words, I don't want to repeat work.

I'm realizing that if I have 3000 Order Line Items to process, that I may be able to do them all in one swoop (attaching the OrderLineItemID to each row of course).

Each node of the final build will have a quantity and size that also require calculation. I've defined 15 different methods for calculating quantity and size (example: same as height, same as width, xref-lookup based on height, xref-lookup based on width, xref-lookup based on height/width combo, hard coded at 1, etc.). As well; each component translates (either directly or via a color-xref lookup) into an actual Product ID.

In cases where the node is not a component (ie: has children), its quantity is used as a multiplier for its branch. Example; if there's 4 "ladder assemblies", and each ladder has 2 plastic plugs, then the "plastic plug" quantity will be 8. Therefore; the hierarchy probably needs to be traversed in sequence - unless I can think of a fancy single update with group-by statement that zooms it all together.
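
A level-by-level rollup is one way to get those extended quantities without a cursor; a rough sketch, assuming the work table also carries ParentID, Qty and a level number Lvl (columns not present in the procedure above):

DECLARE @lvl int, @rc int
SET @lvl = 2 -- level 1 is the Build itself
SET @rc = 1
WHILE @rc > 0
BEGIN
-- multiply each node's quantity by its already-extended parent quantity
UPDATE child
SET child.Qty = child.Qty * parent.Qty
FROM @t AS child
INNER JOIN @t AS parent ON parent.MyID = child.ParentID
WHERE child.Lvl = @lvl
SET @rc = @@ROWCOUNT
SET @lvl = @lvl + 1
END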

I'm afraid my brain is incapable of looking at the problem and knowing just how to best solve it. I plan to use the remainder of this week to work that out. If someone here is inclined to help out, I'll be most grateful (hey; I'll hand out a whole page of beers for an elegant solution).

Not sure if I've given enough info for the next step. I guess that would be either
1. Ordering this into hierarchical sequence and assigning level numbers
2. Ordering this into hierarchical sequence and assigning high-low node numbers.
Hopefully; with a nice single-SQL update statement (or series of statements like Blindman's WHILE ... INSERT).

My goal isn't the most elegant solution in the world, but one that'll work pretty well for the problem at hand (BOM, MRP, Forcasting system for 35m/yr custom build-to-order manufacturer). By "well", I mean modestly fast, scalable, maintainable, modular, flexible, and accurate.

Thanks!|||So you want to run different function tests depending upon whether the node has children or not?
I still do not see why you think you need a cursor for this.|||So you want to run different function tests depending upon whether the node has children or not?
I still do not see why you think you need a cursor for this.

I believe it's not 1 level of children.....|||So you want to run different function tests depending upon whether the node has children or not?
I still do not see why you think you need a cursor for this.
Each node, regardless of its status (Tree, Branch, or Leaf) may be either:
1. Always present, given that it's parent is present.
or
2. Optionally present, dependent upon the Y/N result of a custom function.

The function for each node is custom only to that node.

Example:

CREATE FUNCTION fn_YN_10 (parms...)
RETURNS bit AS
BEGIN
DECLARE @MyAnswer bit
SET @MyAnswer = 1
IF (SELECT MySpecialFeature from tbLineItem where LineItemID = @parmLineItemID) = 'S'
SET @MyAnswer = 0

RETURN @MyAnswer

END

The name "fn_YN_10" is a value on the tbAssy. How can I execute that except within a Cursor? Please illuminate?

Also; I need to skip the entire branch if the answer is 1 (ie: No). One method to do that is have the Cursor order everything in hierarchical sequence(everything for a branch together) and process it in traditional loop fashion and with the levels numbered (tree = 1, next branch = 2, leafs = some higher number). If the answer is 1 (false) then it skips everything until the next node at the same or a lower level.

In skipping a branch; I could possibly add to a 2nd table of "Excluded Branches" and have a select with "NOT EXISTS". I believe there's a way to do that HOWEVER; if I have to use a cursor to get the Dynamic SQL anyway, then the first approach makes the most sense.

Note: If I do process 3000 Line items simultaneously, with avg 100 nodes each, we're talking 300,000 nodes. Now it's time to address performance. If I can avoid a Dynamic SQL Cursor, then by all means, that's the way to go. Not sure if doing all at once will make things faster or just cause indexing problems.

One possible solution: I could create an intermediate function to which the Function Name (and applicable parms) is passed. That would have a static name. It could then do the Dynamic SQL. That would avoid forcing the processing to use a CURSOR just because of the need for Dynamic SQL. But then it's back to how I can skip a branch without doing a Cursor Loop.|||I believe it's not 1 level of children.....
Correct:

Level 1 = tree
Level 2 or higher = branch
Level 2 or higher = leaf

Children always have higher number so by definition cannot be 1.|||but are we talking about children, grandchildren, great grandchildren, etc.|||but are we talking about children, grandchildren, great grandchildren, etc.
Yes.

Example of the tree representation:
Tree = no parent
Branch = has parent and child
Leaf = no child

01 (tree's root - level 1)
..02 (leaf - level 2)
..03 (branch - level 2)
...04 (branch - level 3)
.....05 (leaf - level 4)
......06 (leaf - level 5)
..07 (branch - level 2)
...08 (branch - level 3)
.....09 (leaf - level 4)
.....09 (leaf - level 4)
..10 (leaf - level 2)
..11 (branch - level 2)
...12 (leaf - level 3)

The child, father, grandfather, great grandfather traverses in the other direction and as such, has limitations
01 (great grandfather)
..02 (child) oops, this doesn't really work ...
..03 (grandfather)
...04 (father)
.....05 (child) oops, this doesn't work either
......06 (child) grandchild? doesn't fit.
etc

So; Child, Father, Grandfather notation works when referencing the lineage of a single elementary component, but not for representing a jagged hierarchy from its highest level on down.

I see where child needs to be level 01 and so on using this notation method, ergo the confusion.

My bad for using the Child/Parent notation. I should have stuck with the Tree notation. Sorry about the confusion.

Data structure Internals Question

Hi all,
I have found that adding a column to an existing table (tested on a table
with >1M rows >200 bytes long) behaves as follows:
1. If the added column does not allow NULLs and has a DEFAULT value, the
operation actually changes the row structure.
I can see that from the time it takes to complete and the number of reads
and writes I trace with profiler.
SELECTing from the table shows the default value was correctly entered to
the new column.
So far, so good.
2. If the added column allows NULLs, the operation takes ~0 time and just a
few reads and writes, implying a metadata operation only.
SELECTing from the table shows that NULLs were entered instead of the
DEFAULT value.
This is a documented behavior, I know I can use the "WITH VALUES" option to
force the default values into the columns instead of NULLs which will result
in the same behavior as above (1).
This happens regardless of whether the column has a DEFAULT value and also
regardless of whether the column is fixed length or variable length.
What surprises me about it, is the fact that now I actually have a table
with different row structures.
The old rows (before the column was added) were not changed and the new rows
I INSERT or UPDATE will have a different structure reflecting the newly
added column.
From what I know about the data structures, fixed length columns take up
their size regardless of their actual value and variable length columns
require an additional 2 bytes for the column offset in the row. Both fixed
and variable length columns that allow NULLs require an additional NULL bit
in the NULL bitmap for each row as well.
If I SELECT from the table, I get the NULLs as expected from the old rows as
well but where the **** are they actually stored?
I suspected that since the NULL bitmap always uses 8 bit bytes, perhaps
there was a bit left to spare for the new column (although it doesn't seem
logical to have a NULL bit for a column that does not exist in the row
itself) so I created a table with exactly 8 nullable columns and added a 9th
one to see if it would take the time to allocate another byte for the NULL
bitmap but it seems not to... it completed almost instantaneously (<100ms)
taking only about 20 reads and 3 writes to complete for a >2500 pages table
(I have cleared the cache before running the ALTER TABLE).
Any insights about what's happening under the hood here will be appreciated.
TIA,
Ami
Ami,
It would help a lot if for each of the situations you don't
understand you provide the following:
1. A CREATE TABLE statement that creates the table with
the structure it has before the new column is added.
2. An ALTER TABLE statement that adds the new column.
3. What you see from the ALTER TABLE in terms of reads
and writes from Profiler.
4. What you expected from #3.
5. The statements to create any indexes or constraints that
exist on the table beyond what is in the CREATE TABLE
statement.
You seem to have a good understanding of the way rows are
constructed, and I'm not sure exactly what specific situation
it is that you don't understand. As far as whether there is
a bit in the null bitmap to represent NULLs in a non-existent
column, in some sense, this does exist, but a better way of
saying it is that there are at times unused bits in the NULL
bitmap which are just meaningless filler to pad the data
to byte boundaries; when a new nullable column is added
without VALUES in this case, the change can be represented
without any data changes. One of these spare null-bitmap
bits becomes the null bit for the new column through a
change in metadata and at most a flipping of the bit (I don't
know offhand if spare bits are always zeroed out - the
bits may have to be set with write operations also). The null
values don't take up space in the table beyond the null bit.
Some of this is a little different depending on whether the
column in question is fixed or variable length, or whether
the table has any variable-length columns or not. Whether
the table is a heap (no clustered index) or is its clustered
index (if there is one) may also be an issue, and when data
is actually added, the number of writes may change dramatically
depending on whether adding a new column with VALUES
changes the number of rows that fit on each 8K data page.
I hope you can see why it helps to know the particulars of
the examples that concern you. For more information, you
can look at the topics "Estimating the Size of a Table with a
Clustered Index" and "Estimating the Size of a Table Without
a Clustered Index" in Books Online.
Steve Kass
Drew University
Ami Levin wrote:

>Hi all,
>I have found that adding a column to an existing table (tested on a table
>with >1M rows >200 bytes long) behaves as follows:
>1. If the added column does not allow NULLs and has a DEFAULT value, the
>operation actually changes the row structure.
>I can see that from the time it takes to complete and the number of reads
>and writes I trace with profiler.
>SELECTing from the table shows the default value was correctly entered to
>the new column.
>So far, so good.
>2. If the added column allows NULLs, the operation takes ~0 time and just a
>few reads and writes, implying a metadata operation only.
>SELECTing from the table shows that NULLs were entered instead of the
>DEFAULT value.
>This is a documented behavior, I know I can use the "WITH VALUES" option to
>force the default values into the columns instead of NULLs which will result
>in the same behavior as above (1).
>This happens regardless of whether the column has a DEFAULT value and also
>regardless of whether the column is fixed length or variable length.
>What surprises me about it, is the fact that now I actually have a table
>with different row structures.
>The old rows (before the column was added) were not changed and the new rows
>I INSERT or UPDATE will have a different structure reflecting the newly
>added column.
>From what I know about the data structures, fixed length columns take up
>their size regardless of their actual value and variable length columns
>require an additional 2 bytes for the column offset in the row. Both fixed
>and variable length columns that allow NULLs require an additional NULL bit
>in the NULL bitmap for each row as well.
>If I SELECT from the table, I get the NULLs as expected from the old rows as
>well but where the **** are they actually stored?
>I suspected that since the NULL bitmap always uses 8 bit bytes, perhaps
>there was a bit left to spare for the new column (although it doesn't seem
>logical to have a NULL bit for a column that does not exist in the row
>itself) so I created a table with exactly 8 nullable columns and added a 9th
>one to see if it would take the time to allocate another byte for the NULL
>bitmap but it seems not to... it completed almost instantaneously (<100ms)
>taking only about 20 reads and 3 writes to complete for a >2500 pages table
>(I have cleared the cache before running the ALTER TABLE).
>Any insights about what's happening under the hood here will be appreciated.
>TIA,
>Ami
>
>|||Hi Steve,
Thanks for your prompt reply.
here is a similar script to the one i have used for the last test to try and
focus the question:
CREATE TABLE T1
(
PK_col INT NOT NULL IDENTITY PRIMARY KEY,
Col1 INT NULL,
col2 INT NULL,
col3 INT NULL,
col4 INT NULL,
col5 INT NULL,
col6 INT NULL,
col7 INT NULL,
col8 INT NULL
)
GO
SET NOCOUNT ON
DECLARE @c INT
SET @c = 0
WHILE @c <= 250000
BEGIN
INSERT T1
SELECT @c,@c,@c,@c,@c,@c,@c,@c
SET @c = @c + 1
END
GO
SELECT dpages
FROM sysindexes
WHERE id = OBJECT_ID('T1')
-- 1420 pages
GO
DBCC DROPCLEANBUFFERS
GO
-- start profiler trace
ALTER TABLE T1 ADD col9 INT NULL
GO
-- 46ms duration, 27 Reads, 3 Writes
Since the ALTER TABLE took only 27 reads, it obviously couldn't have changed
all 1420 pages.
Since the original table had exactly 8 nullable columns, it should have
filled the default 1 byte allocated for the null bitmap.
My questions are:
Where are the null values stored for the existing 250,000 rows?
If I execute the following statements:
INSERT T1
SELECT 1,1,1,1,1,1,1,1,1
GO
Now I have a new row that is physically about 9X4 bytes long (ignoring row
overhead) and 2 bytes of null bitmap.
The other 250K rows are 8X4 bytes long with 1 byte null bitmap.
How does SQL Server know how to distinguish between the 2 different
structures?
The only way it can do that IMHO is to "assume" that if the physical row
strcuture is missing a column compared to the table metadata, that column
must be NULL.
When updating, does it compare the table metadata with the actual row
structure it encounters while reading the pages and change the row structure
"on the fly" if necessary?
Things will get much more complicated now if for example I will do:
1. INSERT a few rows with values for all 9 columns
2. DROP col3 (or any other column) - I know that drop column is also a
metadata only operation, the column will not be physically removed.
3. ADD col10
4. UPDATE some rows adding values to the new columns
4. INSERT more rows
5. ... etc. etc.
I've read quite a few books and articles on SQL Server internals but I can't
remember ever encountering any articles relating to this issue.
I can start digging into it by examining DBCC PAGE internal structures but I
was hoping someone here already knows how it works under the hood to save me
the time and effort :-)
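If you do go down that route, the usual pair is DBCC IND to find the pages and DBCC PAGE to dump one; both are undocumented, and the database name and page id below are placeholders:
DBCC TRACEON (3604) -- route DBCC output to the client session
DBCC IND ('MyDb', 'T1', -1) -- list the pages allocated to T1
DBCC PAGE ('MyDb', 1, 1234, 3) -- dump file 1, page 1234 (taken from the DBCC IND output) with per-row detail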
TIA
Ami
"Steve Kass" <skass@.drew.edu> wrote in message
news:OkQfZDuAFHA.2640@.TK2MSFTNGP14.phx.gbl...
> Ami,
> It would help a lot if for each of the situations you don't
> understand you provide the following:
> 1. A CREATE TABLE statement that creates the table with
> the structure it has before the new column is added.
> 2. An ALTER TABLE statement that adds the new column.
> 3. What you see from the ALTER TABLE in terms of reads
> and writes from Profiler.
> 4. What you expected from #3.
> 5. The statements to create any indexes or constraints that
> exist on the table beyond what is in the CREATE TABLE
> statement.
> You seem to have a good understanding of the way rows are
> constructed, and I'm not sure exactly what specific situation
> it is that you don't understand. As far as whether there is
> a bit in the null bitmap to represent NULLs in a non-existent
> column, in some sense, this does exist, but a better way of
> saying it is that there are at times unused bits in the NULL
> bitmap which are just meaningless filler to pad the data
> to byte boundaries; when a new nullable column is added
> without VALUES in this case, the change can be represented
> without any data changes. One of these spare null-bitmap
> bits becomes the null bit for the new column through a
> change in metadata and at most a flipping of the bit (I don't
> know offhand if spare bits are always zeroed out - the
> bits may have to be set with write operations also. The null
> values don't take up space in the table beyond the null bit.
> Some of this is a little different depending on whether the
> column in question is fixed or variable length, or whether
> the table has any variable-length columns or not. Whether
> the table is a heap (no clustered index) or is its clustered
> index (if there is one) may also be an issue, and when data
> is actually added, the number of writes may change dramatically
> depending on whether adding a new column with VALUES
> changes the number of rows that fit on each 8K data page.
> I hope you can see why it helps to know the particulars of
> the examples that concern you. For more information, you
> can look at the topics "Estimating the Size of a Table with a
> Clustered Index" and "Estimating the Size of a Table Without
> a Clustered Index" in Books Online.
> Steve Kass
> Drew University
>
> Ami Levin wrote:
>
|||Hi Ami,
My guess is that NULLs for the new columns aren't stored anywhere, and that
the algorithm for reading data is something like this:
- read metadata
- read row
- if you find less columns than expected, assume NULL for missing columns
It's easy to figure out whether there are missing columns since the row
internally holds the actual number of fixed/variable length columns.
When the row is actually modified for any purpose, then all postponed
physical changes take place.
If you want, I can post this in the private MVP forum to get the developers
feedback, but I suspect it won't be much different than this.
Let me know...
BG, SQL Server MVP
www.SolidQualityLearning.com
"Ami Levin" <XXX_NOSPAM___XXX__amlevin@.mercury.com> wrote in message
news:%23GjYwBvAFHA.3988@.TK2MSFTNGP11.phx.gbl...
> Hi Steve,
> Thanks for your prompt reply.
> here is a similar script to the one i have used for the last test to try
> and
> focus the question:
> CREATE TABLE T1
> (
> PK_col INT NOT NULL IDENTITY PRIMARY KEY,
> Col1 INT NULL,
> col2 INT NULL,
> col3 INT NULL,
> col4 INT NULL,
> col5 INT NULL,
> col6 INT NULL,
> col7 INT NULL,
> col8 INT NULL
> )
> GO
> SET NOCOUNT ON
> DECLARE @.c INT
> SET @.c = 0
> WHILE @.c <= 250000
> BEGIN
> INSERT T1
> SELECT @.c,@.c,@.c,@.c,@.c,@.c,@.c,@.c
> SET @.c = @.c + 1
> END
> GO
> SELECT dpages
> FROM sysindexes
> WHERE id = OBJECT_ID('T1')
> -- 1420 pages
> GO
> DBCC DROPCLEANBUFFERS
> GO
> -- start profiler trace
> ALTER TABLE T1 ADD col9 INT NULL
> GO
> -- 46ms duration, 27 Reads, 3 Writes
> Since the ALTER TABLE took only 27 reads, it obviously couldn't have
> changed
> all 1420 pages.
> Since the original table had exactly 8 nullable columns, it should have
> filled the default 1 byte allocated for the null bitmap.
> My questions are:
> Where are the null values stored for the existing 250,000 rows?
> If I execute the following statements:
> INSERT T1
> SELECT 1,1,1,1,1,1,1,1,1
> GO
> Now I have a new row that is physically about 9X4 bytes long (ignoring row
> overhead) and 2 bytes of null bitmap.
> The other 250K rows are 8X4 bytes long with 1 byte null bitmap.
> How does SQL Server know how to distinguish between the 2 different
> structures?
> The only way it can do that IMHO is to "assume" that if the physical row
> strcuture is missing a column compared to the table metadata, that column
> must be NULL.
> When updating, does it compare the table metadata with the actual row
> structure it encounters while reading the pages and change the row
> structure
> "on the fly" if necessary?
> Things will get much more complicated now if for example I will do:
> 1. INSERT a few rows with values for all 9 columns
> 2. DROP col3 (or any other column) - I know that drop column is also a
> metadata only operation, the column will not be physically removed.
> 3. ADD col10
> 4. UPDATE some rows adding values to the new columns
> 4. INSERT more rows
> 5. ... etc. etc.
> I've read quite a few books and articles on SQL Server internals but I
> can't
> remember ever encountering any articles relating to this issue.
> I can start digging into it by examining DBCC PAGE internal structures but
> I
> was hoping someone here already knows how it works under the hood to save
> me
> the time and effort :-)
> TIA
> Ami
>
> "Steve Kass" <skass@.drew.edu> wrote in message
> news:OkQfZDuAFHA.2640@.TK2MSFTNGP14.phx.gbl...
>|||Hi Itzik :-)
How are you?
I have guessed the same (I wrote it in my reply to Steve) but I would like
to know how it really works.
Our "guess" seems good enough for the simple script I have used but what
about the more complex situation if after adding col9 we will
1. INSERT a few rows with values for all 9 columns
2. DROP col3 - physical data will not be removed...
3. ADD col10
4. UPDATE some rows with values to the new column
5. DROP col5
5. Add col11
6. INSERT more rows
7. ... etc. etc.
can you see a way to do it without keeping extra metadata?
Let us know what the developers say.
I find this very interesting.
See you soon,
Ami
--
"Itzik Ben-Gan" <itzik@.REMOVETHIS.SolidQualityLearning.com> wrote in message
news:ecderKvAFHA.824@.TK2MSFTNGP11.phx.gbl...
> Hi Ami,
> My guess is that NULLs for the new columns aren't stored anywhere, and
that
> the algorithm for reading data is something like this:
> - read metadata
> - read row
> - if you find less columns than expected, assume NULL for missing columns
> It's easy to figure out whether there are missing columns since the row
> internally holds the actual number of fixed/variable length columns.
> When the row is actually modified for any purpose, then all postponed
> physical changes take place.
> If you want, I can post this in the private MVP forum to get the
developers
> feedback, but I suspect it won't be much different than this.
> Let me know...
> --
> BG, SQL Server MVP
> www.SolidQualityLearning.com|||Ami, here's what I got back:
"* Add nullable column -- This does not require any data update because the
way the row format is setup, the newly added nullable column assumes NULL
value by default. Similarly, when a
not-nullable column is altered to be nullable, no update is required
because the row format is capable of storing a nullbit for all columns.
* Drop column -- The dropped column(s) continue to consume space (even if
rows are updated or inserted(fixed-len cols)) until the clustered index is
rebuilt. This includes the LOB allocations, and they are only deallocated
on rebuild. For heaps, dbcc cleantable will reclaim space used by dropped
LOB and variable-length columns only.
In Shiloh, we did not have a case where we postponed a data-update due to a
schema-change until the time the row is touched. In Yukon, we do postpone
pushing/pulling data out-of-row/inrow when the setting is changed for the
new max LOB types.
For dropped columns we did not store any information in Shiloh except for a
flag in sysindexes indicating that LOB deallocation/cleanup is required.
Evidence of dropped columns could be deduced from the offsets allocated to
existing columns. As far as compacting the rows is concerned, this occurs
automatically as part of index rebuild. In Yukon, we now store a bit more
information about dropped columns.
Bottom-line, we don't really have a general concept of "postponed or lazy
schema changes", except for the couple of cases mentioned above which only
Sort-of fall into this category."
BG, SQL Server MVP
www.SolidQualityLearning.com
"Ami Levin" <XXX_NOSPAM___XXX__amlevin@.mercury.com> wrote in message
news:%23fB$chvAFHA.2016@.TK2MSFTNGP15.phx.gbl...
> Hi Itzik :-)
> How are you?
> I have guessed the same (I wrote it in my reply to steve) but i would like
> to know how it really works.
> Our "guess" seems good enough for the simple script I have used but what
> about the more complex situation if after adding col9 we will
> 1. INSERT a few rows with values for all 9 columns
> 2. DROP col3 - physical data will not be removed...
> 3. ADD col10
> 4. UPDATE some rows with values to the new column
> 5. DROP col5
> 5. Add col11
> 6. INSERT more rows
> 7. ... etc. etc.
> can you see a way to do it without keeping extra metadata?
> Let us know what the developers say.
> I find this very interesting.
> See you soon,
> Ami
> --
> "Itzik Ben-Gan" <itzik@.REMOVETHIS.SolidQualityLearning.com> wrote in
> message
> news:ecderKvAFHA.824@.TK2MSFTNGP11.phx.gbl...
>|||Thanks Itzik.
That does not explain the "how" but we'll have to settle for it i guess :-)
Thanks again,
Ami
"Itzik Ben-Gan" <itzik@.REMOVETHIS.SolidQualityLearning.com> wrote in message
news:uN7eb6VBFHA.3596@.TK2MSFTNGP12.phx.gbl...
> Ami, here's what I got back:
> "* Add nullable column -- This does not require any data update because
> the
> way the row format is setup, the newly added nullable column assumes NULL
> value by default. Similarly, when a
> not-nullable column is altered to be nullable, no update is required
> because the row format is capable of storing a nullbit for all columns.
> * Drop column -- The dropped column(s) continue to consume space (even if
> rows are updated or inserted(fixed-len cols)) until the clustered index is
> rebuilt. This includes the LOB allocations, and they are only deallocated
> on rebuild. For heaps, dbcc cleantable will reclaim space used by dropped
> LOB and variable-length columns only.
> In Shiloh, we did not have a case where we postponed a data-update due to
> a
> schema-change until the time the row is touched. In Yukon, we do postpone
> pushing/pulling data out-of-row/inrow when the setting is changed for the
> new max LOB types.
> For dropped columns we did not store any information in Shiloh except for
> a
> flag in sysindexes indicating that LOB deallocation/cleanup is required.
> Evidence of dropped columns could be deduced from the offsets allocated to
> existing columns. As far as compacting the rows is concerned, this occurs
> automatically as part of index rebuild. In Yukon, we now store a bit more
> information about dropped columns.
> Bottom-line, we don't really have a general concept of "postponed or lazy
> schema changes", except for the couple of cases mentioned above which only
> Sort-of fall into this category."
> --
> BG, SQL Server MVP
> www.SolidQualityLearning.com
>
> "Ami Levin" <XXX_NOSPAM___XXX__amlevin@.mercury.com> wrote in message
> news:%23fB$chvAFHA.2016@.TK2MSFTNGP15.phx.gbl...
>