Saturday, February 25, 2012

Data Migration Oracle 7.16 to SQL Server 2000

Hi,

I have been asked to look at optimising a Data Migration Strategy for one of our clients. I am quite new to Data Migration (I'm actually a system designer!).

The migration is from a legacy Oracle 7.16 Db to a SQL Server 2000 environment.

The legacy Db has some large tables - 30 Million+ rows

Table/column Mapping is 1:1

At present we are using DTS to import into the SQL environment.

My question is: Is there any way to leverage the Oracle 7.16 export to speed up the migration process?

Any advice would be appreciated.

Many thanks

Mike

This forum is for SQL Server 2005 Integration Services, not SQL Server 2000 Data Transformation Services.

I suggest that you post your question to the Microsoft DTS newsgroup.

Data migration MSDE -> SQL Compact

Hi all

Currently we are using MSDE as our SQL Server and have decided to switch to the smaller, file-based SQL Server 2005 Compact Edition.

Because of an update issue we need to merge data from an already installed database into the new Compact Edition. Are there any tools we could use in our installation routine (InstallShield), or are there recommendations on how we could do this data merging?

Thanks @. all
Daniel

Hello Daniel,

I am going to begin investigating this issue myself very soon and I'll be more than happy to provide you with any info. I'm amazed that nobody has gotten back to you, especially since you made this post well over a month ago. Have you received any information from others that might be relevant here? If so, I'd really like to hear it.

Thanks,

Corey

|||

Please see the reply from Pragya on this thread: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1300136&SiteID=1

"To make this migration task really easy, we are coming out of a data migration wizard soon!"


Data migration into SQL server 2005 from xml files

Friends

Could any one of you please provide step-by-step info for migrating data from XML files into SQL Server 2005?

cheers :)

satish

Try to provide as much detail as you can so that someone can help you.
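
In case it helps later readers: one common way to load XML into SQL Server 2005 is to bulk-read the file and shred it with the xml type's nodes() method. A minimal sketch; the file path, target table, and element names here are hypothetical:

-- Read the whole file into an xml variable (path is hypothetical)
DECLARE @x xml
SELECT @x = CAST(BulkColumn AS xml)
FROM OPENROWSET(BULK 'C:\data\customers.xml', SINGLE_BLOB) AS src

-- Shred <Customer> elements into relational rows
INSERT INTO dbo.Customers (Id, Name)
SELECT c.value('(Id)[1]',   'int'),
       c.value('(Name)[1]', 'nvarchar(100)')
FROM @x.nodes('/Customers/Customer') AS t(c)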

Data migration in MSDE

Hi
I have a set of distributors who use MSDE as their
database and have VB as the front end. I am planning to
release a second version of my application.
During this upgrade, new tables are being added and existing
tables are being modified.
What would be the best approach for this migration?
Rgds
Kudla
hi Kudla,
"Kudla" <johnvellaip@.yahoo.com> ha scritto nel messaggio
news:135201c4a13d$18f54260$a401280a@.phx.gbl
> Hi
> I have a set of distributors who use MSDE as their
> database and have VB as the front end. I am planning to
> have a second version to my application.
> During this upgrade new tables are being added and existing
> tables are being modified.
> What would be the best approach for this migration?
in order to either install a database from scratch or update existing ones,
I personally use another companion application (still VB6, at the moment)
provided with our main apps..
this other application, via SQL-DMO (but you can use whatever access method
of your choice like ADO, ADO.NET, ..) will create/update the database(s)
from a reserved definition file described like:
[Database]
Name=xxx
DataName=logical_name|physical_name
DataNameN=logical_name|physical_name ; for additional .Ndf files
LogTo=logical_name|physical_name
LogToN=logical_name|physical_name ; for additional .Ldf files
; other keys specify additional settings
each database's structure is described by another reserved file which
lists all the actions, .sql DDL files, and BCPs to be executed, like
<BOF>
SQL=\Tables\MasterTables.sql
'which is a T-SQL DDL file including CREATE TABLE statements
SQL=\Tables\otherTB.sql
SQL=\Views\Views.sql
SQL=\SP\StoredProcedure.sql
.....
SQL=\general\CreateLogins.sql
SQL=\general\Grant.sql
SQL=\general\Deny.sql
.....
SQL=\Populate\TableXXX.sql
'which is a T-SQL INSERT INTO file to load tables
....
BCPIN=\PopulateBCP\tableYYY.txt|table_name|
'which performs a BCP IN bulk load
....
EXEC=INSERT INTO .....
EXEC=UPDATE ...
'which are T-SQL statements to be directly performed
....
<EOF>
personally I based this app on SQL-DMO... the app will parse the file, read
references to the described files (\Tables\MasterTables.sql) and execute
their contained statements, or perform the appropriate task based on the
action prefix..
this app takes command line parameters in order to upgrade existing schemas
too, applying subsequent schema changes stored in version folders like
\00000001\
\00000002\
\0000000n\
the migration path will then generate the new objects executing all listed
and required DDL scripts and perform migration via standard INSERT
statements, or first provide BCP out with successive BCP in, or generate
INSERT INTO DML scripts for specific tables to be later executed...
Red-Gate now provides a new product that should be able to perform the same
task... have a look at http://www.red-gate.com/sql/sql_packager.htm
Andrea Montanari (Microsoft MVP - SQL Server)
http://www.asql.biz/DbaMgr.shtm
http://italy.mvps.org
DbaMgr2k ver 0.9.1 - DbaMgr ver 0.55.1
(my little VB6 + SQL-DMO attempt to provide MS MSDE 1.0 and MSDE 2000 with a
visual interface)
-- remove DMO to reply

data migration from sybase 8.0 to sqlserver 2005




Hi all,

I have a task to migrate a Sybase 8.0 database to MS SQL Server 2005. How can I migrate this using Integration Services (SSIS) or any other options? Please provide some basic info because I am new to Sybase versions.


data migration from sql server 2000 to 7.0

Hi ,
I want to transfer the data from SQL server 2000 to a SQL server 7.0
Can anyone help me in providing the steps I need to follow..
Thanks in advance

How much data, how often, how much time is allowed?

The easiest way, which is still low impact, is probably to produce a staging table on the 2000 server with an identity column and the data that has to be transferred.
The v7 server has an identical table but with an int for the ID field.
The v7 server calls a stored proc on the 2000 server with its last ID via a linked server or remote server (you could even use replication for this but I wouldn't advise it). This sp returns the next 500 records, say, which the v7 server inserts into its staging table.
The v7 server can then do whatever it wishes with the data (it may have a status field on the recs to say they have been processed).

This has very little impact on the 2000 server as its production processes just have to get data into the staging table - it doesn't matter if the v7 server is down.
If the v7 server has to be restored then it will automatically recover the data from the 2000 server. You will have to cater for the 2000 server being restored though.
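
A minimal T-SQL sketch of the pull pattern described above; the table, column, and linked-server names are hypothetical:

-- On the 2000 server: staging table plus a proc returning the next batch
CREATE TABLE dbo.StagingOut (
    RowId INT IDENTITY(1,1) PRIMARY KEY,
    Payload VARCHAR(255) NOT NULL
)
GO
CREATE PROCEDURE dbo.GetNextBatch @LastId INT AS
SELECT TOP 500 RowId, Payload
FROM dbo.StagingOut
WHERE RowId > @LastId
ORDER BY RowId
GO
-- On the v7 server (SRV2000 is a hypothetical linked server name):
INSERT INTO dbo.StagingIn (RowId, Payload)
EXEC SRV2000.MyDb.dbo.GetNextBatch @LastId = 1200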

Data migration from MSAccess to SQL Express 2005

Hi ,

I have a requirement to migrate the data from an existing MS Access database to a newly designed SQL Express 2005 database. Needless to say, the table structures in the two are totally different. I would like to know how I can handle a scenario where I want to map table A in Access to table B in SQL Express (the schemas are different and the number of columns can vary too). How do I migrate the data from table A in Access to table B in SQL Express using SSMA?

Also, I would appreciate it if someone could tell me whether SSMA is the right tool for this, or whether I should use the upsizing wizard of MS Access. The constraint here is that the data needs to be migrated to a completely new schema. I just need to migrate data only and no other objects.

Thanks

Mahesh

Hello,

I am not replying here with any solution as such.

I would like to do same thing.

I have built a complete application using MS Access 2003. Some of the highlights of this application are:

Customized login for each user without using User Level Workgroup Security features.

Each user is assigned 1 of 10 different roles. One of the roles is the Admin role.

Only the Admin role has access to the database window and all objects like tables, queries, forms, macros, modules, etc.

The Shift key is disabled so no one can access the database window.

Admin can enable the Shift key and get temporary access to the database window. The Shift key gets disabled again on exit.

The application has data-capture front-end forms, one-click reports, a quick query tool using front-end forms without the query grid, etc.

Only certain roles can add new data, only certain roles can edit data, and data gets locked after a certain time or data status, etc. Only certain roles can upload/download data.

The application is also password protected. A regular user can open the application without knowing the password, as the password is integrated into the VBA code. This password is essential so that no one can export the data from another database.

Currently all tables are stored in a separate database and linked into the main application.

I would like to move all tables to SQL Server Express. I am assuming that by doing this I will be able to secure all tables better, and it will also help me increase the size of the application beyond 2 GB.

Please let me know the step-by-step process to move Access tables to SQL Server Express.

Thanks

|||

Hi Mahesh,

Refer to http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1056639&SiteID=1, which is answered.

Welcome aboard, Adukio.

Using SSMA you may migrate your Access DB to SQL 2005. Refer to the thread http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1056639&SiteID=1

I would suggest referring to this thread too: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1033679&SiteID=1

Hemantgiri S. Goswami

|||

Hemant,

Thanks for the reply. In fact I have downloaded SSMA and am trying a few things. I don't see an option in this tool where some kind of column mapping can be done. What I mean is:

Table1 (Access Table)    Table2 (SQL Express Table)
Column1                  Column1
Column2                  Column2
Column3

If we assume a scenario like the one above, where I need to migrate an Access table to a SQL table and the number of columns does not match (this is very much possible, as my target schema has been completely redesigned), SSMA fails to migrate the data. So I need to know: is there a provision for handling a scenario like this in SSMA?

Thanks

Mahesh

|||

Hi Mahesha,

SSMA does not handle data transformation, which is what you're wanting to do. (The Access Upsizing Wizard doesn't do this either.) The only SQL tool I know of that can do this is SSIS, which is not included in SQL Express. If you have another version of SQL Server 2005 available, say SQL Dev, you can use SSIS to create a data transformation.

If you don't have another edition of SQL available, you will need to do this manually. I would suggest migrating the data from Access to a new database on your SQL Server, and then using queries to transform the data into your new tables. If this is a process you have to do regularly, you can probably work out a process of pulling data into temporary tables and then appending them to your new schema, all using stored procedures.
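
A minimal sketch of the query-based transformation described above, assuming hypothetical staging and target table names:

-- staging.TableA holds the rows migrated as-is from Access;
-- dbo.TableB is the redesigned target with different columns.
INSERT INTO dbo.TableB (CustomerName, City)
SELECT a.[Name],            -- rename/remap columns as needed
       a.[Town]
FROM staging.TableA AS a
WHERE a.[Name] IS NOT NULL  -- filter out rows the new schema cannot accept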

Mike

|||

Thanks Mike, I think that answers my query. The data migration is a one-time activity here, so I don't really think I need temp tables. Maybe I'll go with queries and stored procedures.

Thanks!

Mahesh

Data migration from Access to SQL server 2005

Hi,

I am trying to migrate the data from an Access database to Microsoft SQL Server on my machine using some scripts. I have enabled OPENROWSET and OPENDATASOURCE support through the Surface Area Configuration. The script that accesses the Excel file works fine, but when I run the script that tries to open the .mdb file it gives me the following error.

OLE DB provider "MSDASQL" for linked server "(null)" returned message "[Microsoft][ODBC Microsoft Access Driver] Cannot open database '(unknown)'. It may not be a database that your application recognizes, or the file may be corrupt.".

Msg 7303, Level 16, State 1, Line 35

Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "(null)".

Any help would be greatly appreciated. Thanks.

-padu

Make sure that the startup account for SQL Server is not a service account (i.e. LocalSystem/Service/etc.). Just use a domain account.

|||

Hi oj,

How do I check whether the startup account is a service account? I am pretty new to SQL Server.

Thanks.

-padu

|||

You can run SQL Server Configuration Manager. It allows you to change the service account there (if desired).
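
For reference, a minimal sketch of the kind of ad hoc query involved; going through the Jet OLE DB provider directly (rather than MSDASQL) often avoids the error above, though the service-account caveat still applies. The file path, credentials, and table name are hypothetical:

SELECT *
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
                'C:\data\legacy.mdb';  -- Access file (hypothetical path)
                'admin';               -- default Access user
                '',                    -- blank password
                'SELECT * FROM Customers')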

Data Migration Error - SQL TYPE Variant data

I am trying to migrate our Portals database from SQL 2000 to SQL 2005, but I received a "SQL Type Variant Data" error during the data migration with some databases. Can anyone help me with this?

Thanks,

Jay

If you explain how you are trying to do the migration and what the error message is then maybe someone will be able to help.

-Jamie

Data Migration :- SQL Server1 - SQL Server2

Hi,
I have 2 similar SQL Server databases, DB1 and DB2, with around 450 tables and a lot of data. My problem is that I need to copy specific records from all tables in DB1 to the corresponding tables in DB2. What I have done right now is run separate INSERT scripts for each table, like

INSERT INTO DB2..table1 SELECT * from DB1..table1 where code='XX'

I would like to know whether this is the right approach, or whether any better way or tool is available to do so. Also, since the number of records is very high, I insert in blocks of, say, 30,000 records each, so that the log file limit will not create a problem.

Thanks in advance. Please help
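
For what it's worth, a minimal sketch of doing the batching in a loop so each INSERT stays under a fixed size; it assumes table1 has an integer key column named id (hypothetical):

DECLARE @rows INT
SET @rows = 1
WHILE @rows > 0
BEGIN
    INSERT INTO DB2..table1
    SELECT TOP 30000 s.*
    FROM DB1..table1 AS s
    WHERE s.code = 'XX'
      AND NOT EXISTS (SELECT 1 FROM DB2..table1 AS d WHERE d.id = s.id)
    SET @rows = @@ROWCOUNT  -- stop once no new rows were copied
END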

I would use Integration Services for a job like this. A nice easy way to start is to right-click the database -> Tasks -> Export Data and follow the wizard.

The outcome of this will be an Integration Services package that can then be edited in SQL Server Business Intelligence Development Studio as a normal Integration Services object.

Take a look at the tutorials in there. They helped me a lot.

see also this http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=631236&SiteID=1

|||

other alternatives could be

define views and BCP the data out from DB1 and into DB2

copy the database files (the databases must be detached or offline so the files are not in use) and attach them as DB2

Integration Services and use the Database Copy Task

Regards

Norbert
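
A minimal sketch of the view-plus-BCP alternative from the command line; the server, view, and file names are hypothetical:

rem Export the filtered rows through a view, in native format (-n)
bcp DB1.dbo.vw_Table1Subset out c:\dump\table1.dat -n -T -S MYSERVER
rem Load them into the matching table in DB2
bcp DB2.dbo.table1 in c:\dump\table1.dat -n -T -S MYSERVER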

|||

you can use snapshot replication with a row filter

data migration - new server uses different collation

The new server uses Latin1_General_CI_AS.
All databases created on the old server use SQL_Latin1_General_CP1_CI_AS.
I understand it can be an involved process to change the default collation
on my new server to SQL_Latin1_General_CP1_CI_AS.
Can someone please help me understand what I need to do?
Thanks for your time
Scott
SQL 2005 Standard.

I can use this on new DBs to make sure they use the correct COLLATION (i.e. the same
as all user dbs):
COLLATE SQL_Latin1_General_CP1_CI_AS
BUT my system dbs now use Latin1_General_CI_AS on the new server.
What problems can I expect in future if these COLLATIONS differ on user /
system dbs?
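
For context, a minimal sketch of the most common symptom when user and system collations differ: comparisons between user tables and temp tables (which inherit tempdb's collation) fail unless you add an explicit COLLATE clause. Table and column names are hypothetical:

-- tempdb follows the server default (Latin1_General_CI_AS here), so #t.name
-- and dbo.Customers.name (SQL_Latin1_General_CP1_CI_AS) collate differently.
CREATE TABLE #t (name VARCHAR(50))
-- Without the COLLATE clause this join fails with a "Cannot resolve the
-- collation conflict" error:
SELECT c.name
FROM dbo.Customers AS c
JOIN #t ON #t.name = c.name COLLATE Latin1_General_CI_AS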

Data Migration - Insert Script / Fresh install

Hi

I have some tables in my SQL Server 2005 database. I need to create insert scripts for them. Since these tables contain CLOB and BLOB values and the number of records is huge, I need to devise a way to install this table data on any new database (assuming the new database is created and the table structure exists). Can someone suggest how I can do this?

DTS or SSIS is out of the question because minimal manual intervention is required.

Thanks & Regards
Imtiaz

You could use BCP to export the data into text/binary files and then import it back. This is the most efficient way. Generating insert statements is not cumbersome (there are some 3rd-party tools that do this), but the files will still be huge and difficult to manage.
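
A minimal sketch of that BCP round trip; native format (-n) carries binary and text (CLOB/BLOB) columns through without conversion. The server, database, and path names are hypothetical:

rem Export in native format so text/image columns survive intact
bcp MyDb.dbo.Documents out c:\dump\documents.dat -n -T -S SOURCESERVER
rem Import into the pre-created table on the new database
bcp NewDb.dbo.Documents in c:\dump\documents.dat -n -T -S TARGETSERVER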

Data Migration

Hi
We are in the process of migrating Data from SQL SERVER 2000 to
Oracle 9i.

Can anyone let me know the steps that need to be followed
to achieve this.

If there's any specific documentation available, please let me know.
Any pointers would be of great help.

Thanks
Shankar Gokule

I think you'd get a more sympathetic ear over here:

http://www.dbforums.com/f4/

Besides... migration is migration...

Copy the structure (ERwin is nice for this), forward engineer (change the code to match the platform)... dump data, load to stage, audit data, load the new structure...

Data Migration

Transferring a SQL database into Access.
Problems:
1. The Identity column is not getting converted automatically into AutoNumber.
2. Default values are not retained.
Is there any way to get this to work?

What version of SQL Server and Access are you using? Also, what process are you using to convert the data?

|||

hi
i am using SQL server 2000, Access 2000.
nilesh

data migration

Dear All,
I'm having to manage data migration from a proprietary db system to an in-house designed sql server db.
Is there anyone out there who has dealt with this and used software or services from companies specialising in this?
All ideas are welcome, thanks in advance!

I would export the data as CSV or a bunch of insert statements and push them into the new db via bulk insert. That's the most db-system-independent way I know, and it only needs a dozen lines of code.

|||

The challenge is mapping data elements and logic between the two systems. And you will need to do custom coding for that. Probably a LOT of custom coding.
My advice would be to load the legacy data into staging tables in your new database, and then draft SQL procedures or scripts to transform it to your new schema. This way the process is repeatable if logical errors occur.
And my advice would be to get a competent DBA to help you with this, and expect it to take a couple of weeks to get it right.

|||

I've done something similar to this before.

Basically, what was needed then was to migrate all data from a proprietary db to MSSQL. Luckily, we were able to connect to that database through ODBC. We added that to MSSQL and from there wrote scripts to do the migration.

I agree with blindman, you may need a lot of coding to do this. We spent a few weeks doing it because almost everything (table structures, fields, etc.) was different.

It's not difficult but you may need time to do it.
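
A minimal sketch of the CSV route suggested above; the file path, table, and delimiters are hypothetical:

-- Load a comma-separated export from the legacy system into a staging table
BULK INSERT dbo.LegacyCustomers
FROM 'C:\export\customers.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2)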

Data Migration

I have an interesting situation. We are attempting to standardize our data
models. Our database is large, with over 100 tables and 50 GB of data.
The differences between various clients are subtle, but again the volume of
tables makes it challenging.
What I am looking for is a tool that will migrate data from an older version
of a database to a top level version of the same database.
This is SQL Server 2000 SP 3a to SQL Server 2000 SP 3a.
Does anyone know of such a tool?
http://www.aspfaq.com/show.asp?id=2209
Andrew J. Kelly SQL MVP
"Michael @. SGMS" <MichaelSGMS@.discussions.microsoft.com> wrote in message
news:803B71E8-13B9-4F9B-889F-2A2DE65B7DA7@.microsoft.com...
>I have an interesting situation. We are attempting to standardize our data
> models. Our database is large, with over 100 tables, and 50 Gigs of data.
> Our differences between various clients is subtle, but again the volume of
> tables makes it challenging.
> What I am looking for is a tool that will migrate data from an older
> version
> of a database to a top level version of the same database.
> This is SQL Server 2000 SP 3a to SQL Server 2000 SP 3a.
> Does anyone know of such a tool?
|||

This does not help. This only, unless I missed it, will give me the
difference in structure... What about moving the data?
"Andrew J. Kelly" wrote:

> http://www.aspfaq.com/show.asp?id=2209
>
> --
> Andrew J. Kelly SQL MVP
>
> "Michael @. SGMS" <MichaelSGMS@.discussions.microsoft.com> wrote in message
> news:803B71E8-13B9-4F9B-889F-2A2DE65B7DA7@.microsoft.com...
>
>
|||

Michael @. SGMS wrote:
> This does not help. This only, unless I missed it, will give me the
> difference in structure......What about moving the data?
>
Couldn't you script these changes using T-SQL and modify as needed per
client? Another option is to use DTS.
David Gugick
Quest Software
www.imceda.com
www.quest.com
|||

My first thought was to use a linked server (where the data I want to migrate
is) and then do something like:
Insert into newdb..TableA
Select * from olddb..TableA
BUT, there are constraints on the new database that cause these to fail. I
was hoping there was something out there that would script the dropping of
the indexes and keys on the new db, then the data move, then the creation of
the keys and constraints on the new db.
I already have a script that will make the old server structure like the
new...
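
Short of a tool, part of this can be scripted by hand; a minimal sketch with hypothetical names. Note that NOCHECK disables foreign key and check constraints but not primary keys or unique indexes:

USE newdb
GO
-- Disable FK/CHECK constraints on the target, load, then re-validate
ALTER TABLE dbo.TableA NOCHECK CONSTRAINT ALL
INSERT INTO dbo.TableA
SELECT * FROM olddb..TableA
ALTER TABLE dbo.TableA WITH CHECK CHECK CONSTRAINT ALL
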
"David Gugick" wrote:

> Michael @. SGMS wrote:
> Couldn't you script these changes using T-SQL and modify as needed per
> client? Another option is to use DTS.
> --
> David Gugick
> Quest Software
> www.imceda.com
> www.quest.com
>
|||

There isn't anything that I know of that will do all of that automatically.
The tool package from www.red-gate.com has a tool to compare the schema and
one to compare data. I know the schema tool will generate change scripts,
but I don't know if it can generate data change scripts. That's usually
highly dependent on the schema and business rules.
Andrew J. Kelly SQL MVP
"Michael @. SGMS" <MichaelSGMS@.discussions.microsoft.com> wrote in message
news:82853B5B-571D-4F94-A967-CC0DD2BBA510@.microsoft.com...
> My first thought was to use a linked server (where the data I want to
> migrate
> is) and then so something like:
> Insert into newdb..TableA
> Select * from olddb..TableA
> BUT, there are contraints on the new database that cause these to fail. I
> was hoping there was something out there that would script the dropping on
> the indexes and keys on the new db, then the data move, then the creation
> of
> the keys and constraints on the new db.
> I already have a script that will make the old server structure like the
> new...
> "David Gugick" wrote:
|||

DB Ghost will do data and schema in a single process.
regards,
Mark Baekdal
http://www.dbghost.com
http://www.innovartis.co.uk
+44 (0)208 241 1762
Build, Comparison and Synchronization from Source Control = Database change
management for SQL Server
"Andrew J. Kelly" wrote:

> There isn't anything that I know of that will do all of that automatically.
> The tool package from www.red-gate.com has a tool to compare the schema and
> one to compare data. I know the schema tool will generate change scripts
> but I don't know if it can generate data change scripts. That's usually
> highly dependant on the schema and business rules.
> --
> Andrew J. Kelly SQL MVP
>
> "Michael @. SGMS" <MichaelSGMS@.discussions.microsoft.com> wrote in message
> news:82853B5B-571D-4F94-A967-CC0DD2BBA510@.microsoft.com...
>
>
|||

Mark,
Will it generate a script to reconcile the data differences?
Andrew J. Kelly SQL MVP
"mark baekdal" <markbaekdal@.discussions.microsoft.com> wrote in message
news:69ED6392-ECEC-49B6-BEF3-C4A3C0E9E75A@.microsoft.com...
> DB Ghost will do data and schema in a single process.
>
> regards,
> Mark Baekdal
> http://www.dbghost.com
> http://www.innovartis.co.uk
> +44 (0)208 241 1762
> Build, Comparison and Synchronization from Source Control = Database
> change
> management for SQL Server
>
>
> "Andrew J. Kelly" wrote:
|||

Yes.
regards,
Mark Baekdal
http://www.dbghost.com
http://www.innovartis.co.uk
+44 (0)208 241 1762
Build, Comparison and Synchronization from Source Control = Database change
management for SQL Server
"Andrew J. Kelly" wrote:

> Mark,
> Will it generate a script to reconcile the data differences?
> --
> Andrew J. Kelly SQL MVP
>
> "mark baekdal" <markbaekdal@.discussions.microsoft.com> wrote in message
> news:69ED6392-ECEC-49B6-BEF3-C4A3C0E9E75A@.microsoft.com...
>
>
|||

Check out http://www.agileinfollc.com DataStudio; it has a multiple-thread
migration engine (20 threads) to migrate data.
"Michael @. SGMS" <MichaelSGMS@.discussions.microsoft.com> wrote in message
news:803B71E8-13B9-4F9B-889F-2A2DE65B7DA7@.microsoft.com...
>I have an interesting situation. We are attempting to standardize our data
> models. Our database is large, with over 100 tables, and 50 Gigs of data.
> Our differences between various clients is subtle, but again the volume of
> tables makes it challenging.
> What I am looking for is a tool that will migrate data from an older
> version
> of a database to a top level version of the same database.
> This is SQL Server 2000 SP 3a to SQL Server 2000 SP 3a.
> Does anyone know of such a tool?

Data Migration

I have 2 Tables

Table 1 has: Id, Type

Table 2 has: Id, Type, PartNumber

I need to migrate data from Table 1 to Table 2 based on Type.

Type can be 1 or 2.

Example of Source.

Table 1:

Id  Type
1   1
2   1
3   1
3   2

Table 2 should have 2 records for each Id and, based on the Type, different Part Numbers. The Part Number will be the same if the source table had just one record (with type 1) for that Id, and different if it had 2 records (with types 1 and 2).

Example of Destination.

Id  Type  PartNumber
1   1     10
1   2     10
2   1     11
2   2     11
3   1     12
3   2     13

================================

As you see above

1. I need to duplicate records which had just one type in the source table:

1  1  10
1  2  10

I will have 2 types (1 and 2) now, but everything else stays the same for the same Id.

In my case the Part Numbers will be GUIDs. I will create them on my own; they will be the same here.

2. I need to keep 2 records, as in the source table, for Ids with 2 types, but with different Part Numbers:

3  1  12
3  2  13

In my case the Part Numbers will be GUIDs. I will create them on my own; they will be different here.

How do I create 2 records for each Id and apply these rules? How do I find out whether there are one or two records in the source, and how do I apply my rules while looping through the records? What control in the SSIS tool can I use?

Thanks.

This looks like a job for T-SQL, not SSIS. If you can elaborate on the "Table 2 should have 2 records for each Id and based on the Type have different Part Numbers. Part numbers will be the same if in the Source table I had just one record with type 1 and if I had 2 records with type 1 and 2 - Part number will be different" statement, I may be able to help more. The logic you describe here isn't precise enough to implement (or perhaps I'm just not following it correctly), but this looks like a simple INSERT .. SELECT to me.

|||

Can you give me more details? I think you understand correctly.

Look at my examples. It describes what I need.

Table 2 should have 2 records for each Id. Part Number will be the same if Source has just one type = 1. Part Number will be different if the Source has type 1 and 2.

In my case Part Number will be guid (uniqueidentifier) which I need to create on the fly. If Part Number will be the same for 2 records - I would need to use the same guid.

Please let me know if you have more questions.

|||

Vita wrote:

Can you give me more details? I think you understand correctly.

Look at my examples. It describes what I need.

Table 2 should have 2 records for each Id. Part Number will be the same if Source has just one type = 1. Part Number will be different if the Source has type 1 and 2.

In my case Part Number will be guid (uniqueidentifier) which I need to create on the fly. If Part Number will be the same for 2 records - I would need to use the same guid.

Please let me know if you have more questions.

"Part Number will be different" is what was unclear to me. Different in what way? In each case, what should it be?

I think your most recent post provides the needed clarity - let me play with it for a while...

|||

Ok... that was more annoying than I figured - it was the NEWID() function that was giving me trouble, so I needed to use a temp table to get the desired results. Here's the deal:

Code Snippet

USE tempdb
GO

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Table1]') AND type in (N'U'))
    DROP TABLE [dbo].[Table1]
GO

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Table2]') AND type in (N'U'))
    DROP TABLE [dbo].[Table2]
GO

CREATE TABLE Table1
(
    Id   INT NOT NULL,
    Type INT NOT NULL
)
GO

CREATE TABLE Table2
(
    Id         INT NOT NULL,
    Type       INT NOT NULL,
    PartNumber UNIQUEIDENTIFIER NULL
)
GO

INSERT INTO Table1 VALUES (1, 1)
INSERT INTO Table1 VALUES (2, 1)
INSERT INTO Table1 VALUES (3, 1)
INSERT INTO Table1 VALUES (3, 2)
GO

WITH IdExistsOnceCTE
AS
(
    SELECT Id
          ,MIN (Type) AS Type
          ,NEWID() AS PartNumber
    FROM Table1
    GROUP BY Id
    HAVING COUNT (Type) = 1
)
SELECT IdExistsOnceCTE.*
INTO #ExistsOnce -- This will fail if run more than once in a row, as the temp table will already exist on subsequent executions!
FROM IdExistsOnceCTE;

WITH IdExistsTwiceCTE
AS
(
    SELECT Id
          ,Type
          ,NEWID() AS PartNumber
    FROM Table1
    WHERE Id NOT IN ( SELECT Id FROM #ExistsOnce )
),
DummyDuplicationCTE
AS
(
    SELECT 1 AS Dup
    UNION ALL
    SELECT 2
)
INSERT INTO Table2
SELECT #ExistsOnce.*
FROM #ExistsOnce
CROSS JOIN DummyDuplicationCTE
UNION ALL
SELECT *
FROM IdExistsTwiceCTE
ORDER BY 1, 2;

SELECT *
FROM Table2;

The final SELECT statement is a verification of the contents of Table2, and based on the inputs you provided, produces this output:

Code Snippet

Id  Type  PartNumber
--  ----  ----------
1   1     8A26BF6E-21D6-4977-AFAE-BA8CB0B22C6D
1   1     8A26BF6E-21D6-4977-AFAE-BA8CB0B22C6D
2   1     C1DCDCCE-4FCD-4B30-98BA-1D003F3B23F1
2   1     C1DCDCCE-4FCD-4B30-98BA-1D003F3B23F1
3   1     38070FA6-0805-45C3-8503-58F8A9EDF473
3   2     26AE5281-149C-47C2-9AE5-4E1F42228EB

Does this give you what you need?

|||

Thank you for your hard work.

The only thing I see wrong is the type.

You got

Id  Type  PartNumber
--  ----  ----------
1   1     8A26BF6E-21D6-4977-AFAE-BA8CB0B22C6D
1   1     8A26BF6E-21D6-4977-AFAE-BA8CB0B22C6D
2   1     C1DCDCCE-4FCD-4B30-98BA-1D003F3B23F1
2   1     C1DCDCCE-4FCD-4B30-98BA-1D003F3B23F1
3   1     38070FA6-0805-45C3-8503-58F8A9EDF473
3   2     26AE5281-149C-47C2-9AE5-4E1F42228EB

and I need

Id  Type  PartNumber
--  ----  ----------
1   1     8A26BF6E-21D6-4977-AFAE-BA8CB0B22C6D
1   2     8A26BF6E-21D6-4977-AFAE-BA8CB0B22C6D
2   1     C1DCDCCE-4FCD-4B30-98BA-1D003F3B23F1
2   2     C1DCDCCE-4FCD-4B30-98BA-1D003F3B23F1
3   1     38070FA6-0805-45C3-8503-58F8A9EDF473
3   2     26AE5281-149C-47C2-9AE5-4E1F42228EB

Type always needs to be 1 and 2.

Everything else looks great.

|||

Here you go:

Code Snippet

WITH IdExistsOnceCTE
AS
(
    SELECT Id
          ,MIN (Type) AS Type
          ,NEWID() AS PartNumber
    FROM Table1
    GROUP BY Id
    HAVING COUNT (Type) = 1
)
SELECT IdExistsOnceCTE.*
INTO #ExistsOnce -- This will fail if run more than once in a row, as the temp table will already exist on subsequent executions!
FROM IdExistsOnceCTE;

WITH IdExistsTwiceCTE
AS
(
    SELECT Id
          ,Type
          ,NEWID() AS PartNumber
    FROM Table1
    WHERE Id NOT IN ( SELECT Id FROM #ExistsOnce )
),
DummyDuplicationCTE
AS
(
    SELECT 1 AS Dup
    UNION ALL
    SELECT 2
)
INSERT INTO Table2
SELECT Id
      ,Dup
      ,PartNumber
FROM #ExistsOnce
CROSS JOIN DummyDuplicationCTE
UNION ALL
SELECT *
FROM IdExistsTwiceCTE
ORDER BY 1, 2;

SELECT *
FROM Table2;

This yields:

Code Snippet

Id  Type  PartNumber
--  ----  ----------
1   1     7578AB32-6F38-4D23-887B-D81DD9D71DBC
1   2     7578AB32-6F38-4D23-887B-D81DD9D71DBC
2   1     F554FFA0-B0DB-4C53-9D3F-181678617A9B
2   2     F554FFA0-B0DB-4C53-9D3F-181678617A9B
3   1     165DEB39-D6DA-4DD3-B577-DD295BAF7455
3   2     55389A91-EAF6-4360-BFAA-10032EA1AB67

|||

Thank you very much!

data migration

Hi all.
I need to do a lot of data migration from Excel spreadsheets, Access
databases, and SQL Server databases into a SQL Server database.
In the past I would have achieved this using a combination of linked
servers and ad hoc connections using opendatasource(). However my new
site is having trouble with Microsoft.Jet.OLEDB.4.0 and I'm worried
that I may have to come up with a different way to do the migration.
Can anyone let me know any other ways (other than linked servers and
DTS - which is not flexible enough for my situation here) that I can
get data from Excel and Access into SQL Server?
No idea is too simple.
Thanks,
Josh.
Hi Josh,
Well I tend to create insert statements in the excel sheet or access query.
That always works for me
Statement formula's usually look like
="insert into Table(Col1,Col2,Col3) values('" & a1 & "'," & b1 & ",'" & c1 &
"')"
then I copy the values into notepad and then into ISQL
I hope this helps
Greg O
Looking for SQL Scripts, Need Help? http://www.SQL-Scripts.com
Document all your databases now.
http://www.ag-software.com/ags_scribe_index.aspx
Crosstab queries from SQL Server.
http://www.ag-software.com/xp_ags_crosstab.aspx
Edit Extended Properties as easy as can be.
http://www.ag-software.com/ags_SSEPE_index.aspx
"Josh White" <whitegoose@.inorbit.com> wrote in message
news:aafea0a8.0404201812.196002df@.posting.google.com...
> Hi all.
> I need to to a lot of data migration from excel spreadsheets, access
> databases, and SQL Server databases into a SQL Server database.
> In the past I would have achieved this using a combination of linked
> servers and ad hoc connections using opendatasource(). However my new
> site is having trouble with Microsoft.Jet.OLEDB.4.0 and I'm worried
> that I may have to come up with a different way to do the migration.
> Can anyone let me know any other ways (other than linked servers and
> DTS - which is not flexible enough for my situation here) that I can
> get data from Excel and Access into SQL Server?
> No idea is too simple.
> Thanks,
> Josh.
|||Hi,
Easiest and fastest way is using DTS to move data from Access and Excel. If
DTS is really not feasible then save the Access / Excel files to a
comma-separated csv / text file and then use "BULK INSERT" to load the data into
SQL Server.
Thanks
Hari
MCDBA
"Josh White" <whitegoose@.inorbit.com> wrote in message
news:aafea0a8.0404201812.196002df@.posting.google.com...
> Hi all.
> I need to to a lot of data migration from excel spreadsheets, access
> databases, and SQL Server databases into a SQL Server database.
> In the past I would have achieved this using a combination of linked
> servers and ad hoc connections using opendatasource(). However my new
> site is having trouble with Microsoft.Jet.OLEDB.4.0 and I'm worried
> that I may have to come up with a different way to do the migration.
> Can anyone let me know any other ways (other than linked servers and
> DTS - which is not flexible enough for my situation here) that I can
> get data from Excel and Access into SQL Server?
> No idea is too simple.
> Thanks,
> Josh.

Data Migration

Can I get some advice guys?
We have a de-normalized database that is currently fed from Siebel
(CRM). We want to pump in data from Pivotal but there is a slight
problem.
We used the Siebel Table's identity column for PK in each
corresponding table (contact, account, etc). The record is then pushed
to our Data Warehouse and used for the corresponding dimension's PK.
My original thought was to simply multiply Pivotal's identity column
by something like 100,000,000 in the ETL and just pump the data in.
After taking my first look at Pivotal, I noticed that Pivotal does not
use identity columns. Pivotal uses a binary field to uniquely identify
each record.
This is a real mess and I get to be the one to fix it! :(
We have several cubes and reports in production so I want to try and
minimize the overall impact.
Can you guys help me out with damage control and point me in the right
direction?

I don't see a problem. You should be able to generate a new surrogate
key on insert, either in the ETL process or when you load the
dimension. In fact it's conventional practice always to allocate new
surrogate keys in the data warehouse. Of course the surrogate should
never be the only key in a dimension so there should be no problem
uniquely identifying the data row. But maybe I misunderstood what you
are asking.
David Portas
SQL Server MVP
--

|||

You are going to have to change how your keys are generated, and stop using
the Siebel key in your own DB.
I suggest generating new surrogate keys in your DB for both Siebel and
Pivotal, and keeping the key "mapping" in a separate table:
Create Table KeyMap (SiebelPK Integer Null, PivotalPK Binary(xx) Null,
NewPK Integer Identity Not Null Primary Key)
And then populate it with all the currently existing Siebel records to start
off with:
Set Identity_Insert KeyMap On
Insert KeyMap (SiebelPK, NewPK)
Select Distinct SiebelKey, SiebelKey
From ExistingTable
Change your import process to insert into the key map first, then use the
created identity for the insert into the rest of your data tables...
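
A minimal sketch of what that import step could look like, with a hypothetical Pivotal staging table:

-- 1) Allocate new surrogate keys for Pivotal records not seen before
INSERT KeyMap (PivotalPK)
SELECT DISTINCT p.PivotalPK
FROM PivotalStaging AS p
WHERE NOT EXISTS (SELECT 1 FROM KeyMap AS k WHERE k.PivotalPK = p.PivotalPK)
-- 2) Load the data tables through the map so they only ever see NewPK
INSERT dbo.Contact (ContactKey, ContactName)
SELECT k.NewPK, p.ContactName
FROM PivotalStaging AS p
JOIN KeyMap AS k ON k.PivotalPK = p.PivotalPK
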
"daveg.01@.gmail.com" wrote:

> Can I get some advice guys?
> We have a de-normalized database that is currently fed from Siebel
> (CRM). We want to pump in data from Pivotal but there is a slight
> problem.
> We used the Siebel Table's identity column for PK in each
> corresponding table (contact, account, etc). The record is then pushed
> to our Data Warehouse and used for the corresponding dimension's PK.
>
> My original thought was to simply multiply Pivotal's identity column
> by something like 100,000,000 in the ETL and just pump the data in.
> After taking my first look at Pivotal, I noticed that Pivotal does not
> use identity columns. Pivotal uses a binary field to uniquely identify
> each record.
> This is a real mess and I get to be the one to fix it! :(
> We have several cubes and reports in production so I want to try and
> minimize the overall impact.
> Can you guys help me out with damage control and point me in the right
> direction?
>
|||

Oh, don't forget to turn off Identity_Insert after you insert all the
existing Siebel records...
Set Identity_Insert KeyMap Off
"daveg.01@.gmail.com" wrote:

> Can I get some advice guys?
> We have a de-normalized database that is currently fed from Siebel
> (CRM). We want to pump in data from Pivotal but there is a slight
> problem.
> We used the Siebel Table's identity column for PK in each
> corresponding table (contact, account, etc). The record is then pushed
> to our Data Warehouse and used for the corresponding dimension's PK.
>
> My original thought was to simply multiply Pivotal's identity column
> by something like 100,000,000 in the ETL and just pump the data in.
> After taking my first look at Pivotal, I noticed that Pivotal does not
> use identity columns. Pivotal uses a binary field to uniquely identify
> each record.
> This is a real mess and I get to be the one to fix it! :(
> We have several cubes and reports in production so I want to try and
> minimize the overall impact.
> Can you guys help me out with damage control and point me in the right
> direction?
|||

Thanks for the suggestion!
It sounds like that is exactly what I need to do.

Data Migration

Hi friends,
In my project I have redesigned my database structure. In the existing
structure there are no primary keys and no relationships between the data.
In the new structure, primary keys and relationships have been added.
Now I want to migrate the existing data into the new structure.
There is a chance of duplicate records and also records that do not
satisfy referential integrity.
How do I migrate the data? I want to have a copy of the duplicate records and
also of the records which do not satisfy referential integrity.
It's a huge database, so I can't query table by table to find the mismatched
records.
How do I proceed?
thanks
vanitha

Hi,
Let me know if you can consider the following:
I assume that you have two databases, one normalized and the other not
normalized.
1. Back up both databases.
2. Add an err_text column to all tables in the non-normalized DB.
3. Write a stored procedure with the following logic:
   a. Select a record and try inserting it into the normalized DB's table;
      if no error occurs, delete the same record from the non-normalized
      table, otherwise update the err_text column of the respective table
      in the non-normalized DB with the error.
   b. Perform step (a) for each record in every table in the non-normalized DB.
4. After step 3, whatever records remain in the tables of the non-normalized DB
   failed either the primary key or the referential integrity check.
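
A minimal sketch of step 3 above for a single row, in SQL Server 2000 syntax (@@ERROR, since TRY/CATCH only arrived in 2005); the table and column names are hypothetical, and in practice @CurrentId would come from a cursor over the source table:

DECLARE @err INT, @CurrentId INT
SET @CurrentId = 42  -- hypothetical; normally fetched from a cursor

INSERT INTO NewDb.dbo.Customers (Id, Name)
SELECT Id, Name FROM OldDb.dbo.Customers WHERE Id = @CurrentId
SET @err = @@ERROR

IF @err = 0
    DELETE FROM OldDb.dbo.Customers WHERE Id = @CurrentId
ELSE
    UPDATE OldDb.dbo.Customers
    SET err_text = 'insert failed with error ' + CAST(@err AS VARCHAR(10))
    WHERE Id = @CurrentId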
|||

Vanitha
There are many approaches to this; some are very tricky too.
1) One thing that you can easily do is to update the new tables and compare with
the old ones to find unmatched rows using the primary key (you can use a dynamic
query to supply table names while comparing).
2) You can use sp_MSforeachtable: you have to write something dynamic to
obtain all duplicate rows.
3) Write a procedure (using a cursor perhaps) to take columns from
information_schema.columns for each table and supply those columns to
find duplicate rows using GROUP BY and COUNT(*) > 1 as you usually do.
4) Use a sproc to insert row by row into another table, and insert into an
error table wherever there is an error using @@error <> 0. This approach gives
the complete rows for which the inserts failed.
But 2 and 3 only give duplicate rows and do not check other constraints.
Hope this will give you a lead, though it does not solve everything.
Regards
R.D
"Vanitha" wrote:

> Hi friends,
> In my project I have redesigned my database structure. In the existing
> structure there is no Primary key and no relationship b/w data.
> In the new structure Primary key and the relationship is added.
> Now I want to migrate the existing data into the new structure.
> There is a chance for duplicate records and also records that does not
> satisfy referential integrity.
> How to migrate the data? I want to have a copy of the duplicate records an
d
> also the records which does not satisfy referential integrity.
> Its a huge database, so i can't query table by table to find the mismatch
> records.
> How to proceed?
> thanks
> vanitha
>|||I can't add a column to the existing table.
Can I add a new table into the existing database called "errorTable" and the
n
insert the table_name, column data and the error description.
pls advice on this.
thanks
"SlowLearner" wrote:

> Hi,
> Let me know can you consider the following:
> I assume that you have two database one is normalized and other one is not
> normialized.
> 1. Backup the Both Databases.
> 2. Add err_text column to all tables in non-normolized DB.
> 3. Write a store procedure with the following logic:
> a. Select a record and try inserting to Normolized DBs table ,
> if no error occurs delete the same from Non-Normolized table,
> otherwise
> updated err_text column of respective table in non-normolized db
> with the
> error
> b. perform step (a) for each record in every table in non-normlized DB
.
> 4. After the step3 what ever record exists in table of non-normolized db
> either they
> are passed Primary or referential integrity.
>|||there are millions of records, so I can't select each and every record and
check the errors.
I want to do this in bulk.
pls help me to solve this.
thanks
vanitha
"SlowLearner" wrote:

> Hi,
> Let me know can you consider the following:
> I assume that you have two database one is normalized and other one is not
> normialized.
> 1. Backup the Both Databases.
> 2. Add err_text column to all tables in non-normolized DB.
> 3. Write a store procedure with the following logic:
> a. Select a record and try inserting to Normolized DBs table ,
> if no error occurs delete the same from Non-Normolized table,
> otherwise
> updated err_text column of respective table in non-normolized db
> with the
> error
> b. perform step (a) for each record in every table in non-normlized DB
.
> 4. After the step3 what ever record exists in table of non-normolized db
> either they
> are passed Primary or referential integrity.
>|||why dont you create a new table or temp table with the same syntax and add
all uninserted columns data to new error table
Regards
R.D
"R.D" wrote:
> Vanitha
> There are many approaches to this. some are very trickey too.
> 1)One thing that you can easily do is to update new tables and compare wi
th
> old one find unmatched rows using pimary key.( u can use dynamic query to
> supply table names while comparing)
> 2) You can use sp_MSforeachtble: You have to write something dynamically t
o
> abtain all duplicate rows
> 3) write a procedure (using cursor perhaps) to take columns from
> information.schema.columns where for each table and supply those columns t
o
> find out duplicat rows using group by and count(*) >1 as you usually do.
> 4) Use a sproc to insert row by row to another table and insert into
> errortable whereever there is an error using @.@.error <>. This approach giv
es
> coplete rows for which inserts failed.
> but 2 and 3 only gives duplicate rows and does not check other constriants
> Hope This will give you a lead. though does not solve
> Regards
> R.D
>
> "Vanitha" wrote:
There are n number of records... so I can't check record by record.
Is there any way to do this in bulk? And if any error occurs during the operation, the
migration should not be cancelled; it should log to another table or something like
that.
thanks
vanitha
"R.D" wrote:
> Vanitha
> There are many approaches to this. some are very trickey too.
> 1)One thing that you can easily do is to update new tables and compare with
> old one find unmatched rows using pimary key.( u can use dynamic query to
> supply table names while comparing)
> 2) You can use sp_MSforeachtble: You have to write something dynamically to
> abtain all duplicate rows
> 3) write a procedure (using cursor perhaps) to take columns from
> information.schema.columns where for each table and supply those columns to
> find out duplicat rows using group by and count(*) >1 as you usually do.
> 4) Use a sproc to insert row by row to another table and insert into
> errortable whereever there is an error using @.@.error <>. This approach gives
> coplete rows for which inserts failed.
> but 2 and 3 only gives duplicate rows and does not check other constriants
> Hope This will give you a lead. though does not solve
> Regards
> R.D
>
> "Vanitha" wrote:
>
|||

TRY THIS
CAN YOU JUST INSERT INTO NEW TABLE AND COMPARE WITH OLD TABLE SO THAT YOU
KNOW WHAT RECORDS ARE NOT INSERTED
this is like this
1) add an identity column in the old table and the new table
2) insert into new table2( normalised)
3) insert rows in an order
4) find out which rows are not available in the new table using something like
SELECT * FROM TABLE1 LEFT OUTER JOIN TABLE2 ON TABLE1.IDENTITY =
TABLE2.IDENTITY WHERE TABLE2.IDENTITY IS NULL
I think this is simple and possible
Regards
R.D
REGARDS
R.D
"Vanitha" wrote:
> There are n number of records... so i can't chk record by record.
> is there any way to do in bulk. and if any error during the operation, the
> migration shd not be cancelled, it shd log in another table or something like
> that.
> thanks
> vanitha
> "R.D" wrote:
>
|||

OOPS
IN THE SECOND TABLE THE IDENTITY COLUMN SHOULD ALSO BE INSERTED EXPLICITLY, OR MAKE
IT A NUMERIC.
sorry, caps lock was pressed on my keyboard
R.D
"Vanitha" wrote:
> There are n number of records... so i can't chk record by record.
> is there any way to do in bulk. and if any error during the operation, the
> migration shd not be cancelled, it shd log in another table or something l
ike
> that.
> thanks
> vanitha
> "R.D" wrote:
>
|||

thanks a lot
if any error occurs during migration, it stops the migration process.
how do I continue the migration even if an error occurs?
thanks
vanitha
"R.D" wrote:
> OOPS
> IN THE SECOND TABLE THE IDENTITY COLUMN SHOULD ALSO BE INSERTED EXPLICITLY, OR MAKE
> IT A NUMERIC.
> sorry there is caps lock pressed on my key board
> R.D
> "Vanitha" wrote:
>

Data Migration

Hi friends,
I have changed the existing database scripts by adding primary key and
foreign key constraints.
Now I have created the new structure, and I want to migrate the existing data
into the new structure. There is a chance of duplicate records and also
records that do not satisfy referential integrity.
How do I populate the data from the existing DB and how do I avoid the
errors?
thanks a lot
vanitha

You have to eliminate the duplicate records. Do you want to delete and
discard them, or do you want to keep them? You have to provide further
information / DDL for us to help you.
HTH, Jens Suessmeyer.
http://www.sqlserver2005.de
--

|||

hi vanitha,
--You can use this code to scan duplicate keys
use northwind
select orderid from [order details]
group by orderid
having count(orderid)>1
thanks,
Jose de Jesus Jr. Mcp,Mcdba
Data Architect
Sykes Asia (Manila philippines)
MCP #2324787
"Vanitha" wrote:

> Hi friends,
> I have changed the existing database scripts by adding primary key and
> foreign key constraints.
> Now i have created the new structure, I want to mograte the existing data
> into the new structure. There is a chance for duplicate records and also
> records that does not satisfy referential integrity.
> How to create a populate data from the existing DB and how to avoid the
> errors?
> thanks a lot
> vanitha

|||

How do I generate the populate script?
thanks
vanitha
"Jose G. de Jesus Jr MCP, MCDBA" wrote:
> hi vanitha,
> --You can use this code to scan duplicate keys
> use northwind
> select orderid from [order details]
> group by orderid
> having count(orderid)>1
> --
> thanks,
> --
> Jose de Jesus Jr. Mcp,Mcdba
> Data Architect
> Sykes Asia (Manila philippines)
> MCP #2324787
>
> "Vanitha" wrote:
>
|||

Hi
You may want to check the data first to see if this does apply:
e.g. for duplicates
SELECT A.col1, A.col2, A.col3
FROM MyTableA
GROUP BY A.col1, A.col2, A.col3
HAVING COUNT(*) > 1
For a foreign key try something like:
SELECT A.col1, A.col2, A.col3
FROM MyTableA A
WHERE NOT EXISTS ( SELECT * FROM FKTable F WHERE A.col2 = F.col1 )
If you want to eliminate the duplicates when inserting the data use a
DISTINCT clause
INSERT INTO NewTable ( col1, col2, col3 )
SELECT DISTINCT A.col1, A.col2, A.col3
FROM MyTableA
To eliminate those that do not have FKs
INSERT INTO NewTable ( col1, col2, col3 )
SELECT A.col1, A.col2, A.col3
FROM MyTableA A
WHERE EXISTS ( SELECT * FROM FKTable F WHERE A.col2 = F.col1 )
John
"Vanitha" <Vanitha@.discussions.microsoft.com> wrote in message
news:DA29DC9C-1109-491C-9970-E69EBAAB16AC@.microsoft.com...
> Hi friends,
> I have changed the existing database scripts by adding primary key and
> foreign key constraints.
> Now i have created the new structure, I want to mograte the existing data
> into the new structure. There is a chance for duplicate records and also
> records that does not satisfy referential integrity.
> How to create a populate data from the existing DB and how to avoid the
> errors?
> thanks a lot
> vanitha|||insert into destination(orderid,x,y,z) --> this are the insert hint
select orderid,x,y,z from orderdetails -->inserted must match
where orderid not in
(
select orderid from [order details]
group by orderid
having count(orderid)>1
)
process those that are duplicates and insert them afterwards
thanks,
Jose de Jesus Jr. Mcp,Mcdba
Data Architect
Sykes Asia (Manila philippines)
MCP #2324787
"Vanitha" wrote:
> How to generate the populate script?
> thanks
> vanitha
> "Jose G. de Jesus Jr MCP, MCDBA" wrote:
>

Data migration

Dear all
I want to know if SQL Server 2000 supports 2-byte languages, say Chinese. I
have an Excel file with Chinese characters in it; after importing it into a SQL
Server table using DTS, all the Chinese characters become question marks (?).
What should I do?
--
Best regards,
Richard

See my reply in .dts
--
--
Allan Mitchell MCSE,MCDBA, (Microsoft SQL Server MVP)
www.allisonmitchell.com - Expert SQL Server Consultancy.
www.SQLDTS.com - The site for all your DTS needs.
I support PASS - the definitive, global community
for SQL Server professionals - http://www.sqlpass.org
"lurc" <lurchot@.hotmail.com> wrote in message
news:uSsMTkazDHA.540@.tk2msftngp13.phx.gbl...
> Dear all
> I want to know if sql server 2000 supports 2-bytes language, say Chinese?
I
> have an Excel file with Chinese characters in it, after imported into sql
> server table using dts, all Chinese become question mark(?).
> How should I do?
> --
> Best regards,
> Richard
>

Data migration

I have a SQL 2K (SP3) box with 2 partitions. All of the databases are on
the first one (which is very small), and it is nearing 100% capacity. What
is the best way to point/move the databases to the second partition?
tia,
Gary

hmm, should I just detach, move the .mdf and .ldf files to the second
partition and then attach them back? Would that cause any problems with the
front end?
-Gary
"Gary" <gary.rey@.NOOOOSPAM.virtium.com> wrote in message
news:OdOblJPbDHA.1740@.TK2MSFTNGP10.phx.gbl...
> I have SQL 2K (SP3) box with 2 partitions and all of the databases are on
> the first one (which is very small), and that is nearing 100% capacity.
What
> is the best way to point/move the databases to the second parition?
> tia,
> Gary
>
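
The detach/move/attach approach mentioned above is the standard one on SQL 2000; a minimal sketch with hypothetical names and paths (the front end is unaffected as long as the database name stays the same):

EXEC sp_detach_db 'MyDb'
-- move MyDb.mdf / MyDb_log.ldf to the second partition at the OS level, then:
EXEC sp_attach_db @dbname    = 'MyDb',
                  @filename1 = 'E:\SQLData\MyDb.mdf',
                  @filename2 = 'E:\SQLData\MyDb_log.ldf'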

data migration

Hi all.
I need to to a lot of data migration from excel spreadsheets, access
databases, and SQL Server databases into a SQL Server database.
In the past I would have achieved this using a combination of linked
servers and ad hoc connections using opendatasource(). However my new
site is having trouble with Microsoft.Jet.OLEDB.4.0 and I'm worried
that I may have to come up with a different way to do the migration.
Can anyone let me know any other ways (other than linked servers and
DTS - which is not flexible enough for my situation here) that I can
get data from Excel and Access into SQL Server?
No idea is too simple.
Thanks,
Josh.Hi Josh,
Well I tend to create insert statements in the excel sheet or access query.
That always works for me
Statement formula's usually look like
="insert into Table(Col1,Col2,Col3) values('" & a1 & "'," & b1 & ",'" & c1 &
"')"
then I copy the values into notepad and then into ISQL
I hope this helps
--
Greg O
Looking for SQL Scripts, Need Help? http://www.SQL-Scripts.com
Document all your databases now.
http://www.ag-software.com/ags_scribe_index.aspx
Crosstab queries from SQL Server.
http://www.ag-software.com/xp_ags_crosstab.aspx
Edit Extended Properties as easy as can be.
http://www.ag-software.com/ags_SSEPE_index.aspx
"Josh White" <whitegoose@.inorbit.com> wrote in message
news:aafea0a8.0404201812.196002df@.posting.google.com...
> Hi all.
> I need to to a lot of data migration from excel spreadsheets, access
> databases, and SQL Server databases into a SQL Server database.
> In the past I would have achieved this using a combination of linked
> servers and ad hoc connections using opendatasource(). However my new
> site is having trouble with Microsoft.Jet.OLEDB.4.0 and I'm worried
> that I may have to come up with a different way to do the migration.
> Can anyone let me know any other ways (other than linked servers and
> DTS - which is not flexible enough for my situation here) that I can
> get data from Excel and Access into SQL Server?
> No idea is too simple.
> Thanks,
Josh.
|||Hi,
The easiest and fastest way is using DTS to move data from Access and Excel. If
DTS is really not feasible, then save the Access / Excel files to a comma-separated
CSV / text file and then use "BULK INSERT" to load the data into
SQL Server.
Thanks
Hari
MCDBA
"Josh White" <whitegoose@.inorbit.com> wrote in message
news:aafea0a8.0404201812.196002df@.posting.google.com...
> Hi all.
> I need to to a lot of data migration from excel spreadsheets, access
> databases, and SQL Server databases into a SQL Server database.
> In the past I would have achieved this using a combination of linked
> servers and ad hoc connections using opendatasource(). However my new
> site is having trouble with Microsoft.Jet.OLEDB.4.0 and I'm worried
> that I may have to come up with a different way to do the migration.
> Can anyone let me know any other ways (other than linked servers and
> DTS - which is not flexible enough for my situation here) that I can
> get data from Excel and Access into SQL Server?
> No idea is too simple.
> Thanks,
> Josh.
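A sketch of the BULK INSERT route, assuming the Excel sheet has been saved as C:\import\customers.csv and a column-compatible dbo.Customers table already exists (both names are placeholders):

-- Load a comma-separated file into an existing table.
-- The path is read by the SQL Server service, so it must be valid on the server.
BULK INSERT dbo.Customers
FROM 'C:\import\customers.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2  -- skip the header row
)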

Data Matching

Dear All,
Firstly, I know my question may not fit perfectly in this newsgroup, but I
hope to find some assistance or guidance.
We have the customers' data in 3 different systems (CRM, ERP, and another
custom-developed system), and we are now going to assign a unique customer id
across all these systems. All these systems use MS SQL as the data store;
however, the problem is there are no common fields across these systems other
than the name and address, and of course the names are not typed exactly the
same on each system. Is there any tool I can use to do intelligent matching
(using phonetics, similarities, ...)?
I'll be so grateful for your support.
Regards,
Mohamed
SQL FTS is not the ideal tool for this.
You need something like http://www.name-searching.com/Correct_Address.html
to identify duplicates, either that or write some custom scripts to do
pattern matching, possibly using the LIKE or equality operator.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
"Mohamed Zaki" <m_o_z_a_k_i@.link.net> wrote in message
news:%23qEnFBn9EHA.3484@.TK2MSFTNGP10.phx.gbl...
> Dear All,
> Firstly I know my question may be not perfectly fit in this newsgroup, but
> I hope I find some assistance or guidance,
> we have the customers data in different 3 systems (CRM,ERP, another Custom
> developed system) , we are now going to assign a unique customer id across
> all these systems, all these systems use MS SQL as data store, however,
> the problem is there is no common fields across these systems else the
> name and address, but of course the names are not typed exactly on each
> different system, is there any tools I can use to make intelligent
> matching (using Phonetics , similarities ,...)
> I'll be so grateful for your support.
> Regards,
> Mohamed
>
|||Mohamed,
While not directly related to Full Text Search (FTS), this subject of
matching similar data, or data cleansing, has been discussed previously in
this newsgroup. The links below relate to SQL Server 2005 (Yukon), as these
are new features in that beta version of SQL Server, but they might be
helpful to you:
"Fuzzy Lookup and Fuzzy Grouping in Data Transformation Services for SQL
Server 2005" at:
http://msdn.microsoft.com/library/de...FzDTSSQL05.asp
Solving Business Problems with SQL Server 2005 Data Mining
http://blogs.msdn.com/tims/archive/2...17/186100.aspx
I will also be blogging on this subject of similarity search using both SQL
Server 2000 and 2005 in future blogs.
Regards,
John
SQL Full Text Search Blog
http://spaces.msn.com/members/jtkane/
"Mohamed Zaki" <m_o_z_a_k_i@.link.net> wrote in message
news:#qEnFBn9EHA.3484@.TK2MSFTNGP10.phx.gbl...
> Dear All,
> Firstly I know my question may be not perfectly fit in this newsgroup, but
I
> hope I find some assistance or guidance,
> we have the customers data in different 3 systems (CRM,ERP, another Custom
> developed system) , we are now going to assign a unique customer id across
> all these systems, all these systems use MS SQL as data store, however,
the
> problem is there is no common fields across these systems else the name
and
> address, but of course the names are not typed exactly on each different
> system, is there any tools I can use to make intelligent matching (using
> Phonetics , similarities ,...)
> I'll be so grateful for your support.
> Regards,
> Mohamed
>
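For the custom-script route mentioned above, SQL Server's built-in SOUNDEX and DIFFERENCE functions give a crude phonetic match that can serve as a first pass before (or alongside) a dedicated tool. A sketch with made-up table and column names:

-- DIFFERENCE returns 0-4; 4 means the two SOUNDEX codes match exactly.
-- Pair a loose phonetic match on the name with a hard match on address data.
SELECT crm.customer_id AS crm_id, erp.customer_id AS erp_id
FROM crm_customers crm
INNER JOIN erp_customers erp
    ON DIFFERENCE(crm.customer_name, erp.customer_name) >= 3
    AND crm.postal_code = erp.postal_code

Candidate pairs from a query like this would still need human review before the shared customer id is assigned.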

Data Mart rollbacks

How have folks been managing rollbacks on failures inside SSIS when populating data marts?
For example - we have a separate package for each dimension table, then a master Fact table update. If one of the dimension table updates fails - how have you rolled back the previous changes in the tables updated prior to the failure - or if the Fact table package fails - how do you manage rollback in all the dimension tables?
My first thought was using the Audit table information to determine which tables needed to be rolled back.
Hello Joe,
What about putting the Tasks (Execute Package tasks) in a transaction?
http://msdn2.microsoft.com/en-us/library/ms137690(SQL.90).aspx

Allan Mitchell
http://wiki.sqlis.com | http://www.sqlis.com | http://www.sqldts.com |
http://www.konesans.com

> How have folks been managing rollbacks on failures inside SSIS when
> populating data marts?
> For example - we have a seperate package for each dimension table,
> then a master Fact table update. If one of the dimension table
> updates fails - how have you rolled back the previous changes in the
> tables updated prior to the failure - or if the Fact tabel package
> fails - how do you manage rollback in all the dimension tables?
> My first thought was using the Audit table information to determine
> which tables needed rolled back.
>
|||Is this what your team would implement?
"Allan Mitchell" <allan@.no-spam.sqldts.com> wrote in message
news:885683c261f8c97ff6e74166f0@.news.microsoft.com ...
> Hello Joe,
> What about putting the Tasks (Execute Package tasks) in a transaction?
> http://msdn2.microsoft.com/en-us/library/ms137690(SQL.90).aspx
>
> --
> Allan Mitchell
> http://wiki.sqlis.com | http://www.sqlis.com | http://www.sqldts.com |
> http://www.konesans.com
>
>
|||Hello Joe,
Yes. I would be looking to put things inside of transactions. I may logically
split things up, but yes, transactions would be the way for me.
Allan Mitchell
http://wiki.sqlis.com | http://www.sqlis.com | http://www.sqldts.com |
http://www.konesans.com
[vbcol=seagreen]
> Is this what your team would implement?
> "Allan Mitchell" <allan@.no-spam.sqldts.com> wrote in message
> news:885683c261f8c97ff6e74166f0@.news.microsoft.com ...
|||the easy way:
backup the DB, execute the ETLs, restore the DB in case of a problem...
in fact, it's a big recommendation to always back up first, so there is no overhead here.
"Joe" <hortoristic@.gmail.dot.com> wrote in message news:D6985AF3-799E-4B96-8A87-7A823C9C1FC2@.microsoft.com...
How have folks been managing rollbacks on failures inside SSIS when populating data marts?
For example - we have a seperate package for each dimension table, then a master Fact table update. If one of the dimension table updates fails - how have you rolled back the previous changes in the tables updated prior to the failure - or if the Fact tabel package fails - how do you manage rollback in all the dimension tables?
My first thought was using the Audit table information to determine which tables needed rolled back.
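For the backup-first approach, a minimal sketch with an assumed database name and backup path; the RESTORE runs only if one of the packages fails:

-- Before kicking off the ETL packages:
BACKUP DATABASE DataMart TO DISK = 'E:\Backup\DataMart_preETL.bak' WITH INIT

-- If a dimension or fact package fails, put the whole mart back as it was:
RESTORE DATABASE DataMart FROM DISK = 'E:\Backup\DataMart_preETL.bak' WITH REPLACE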

Data Mart Maintenance

The database maintenance on our data mart SQL server is causing problems. The re-index process causes too much file growth and the processes are unable to complete due to disk space constraints. I want to write a process to re-index the database, but handle it a single file group at a time and then shrink the file group before going on to the next one. I'm looking to write a maintenance routine (stored procedure) that will get all of the tables from a file group and re-index them. Any general ideas would be greatly appreciated.

I think I've found a reindexing solution to the problem I posted. I'm posting an answer here, and if anyone wants to comment that's fine, but I am doing it to help out other developers out there. Note that I only reindex tables with clustered indexes in this database; that is because DBCC DBREINDEX called with just a table name rebuilds all of that table's indexes, so targeting the tables that have clustered indexes covers their non-clustered indexes as well.

-- Cursor-driven reindex, one file group at a time, shrinking each file
-- before moving on. Corrected from the original post: the variable
-- declarations, the loop-closing ENDs, and the FG cursor cleanup were
-- missing, the file-name cursor joined on the wrong column, and
-- DBCC SHRINKFILE ran outside the file loop against a stale file name.

DECLARE @fgroupid int, @fname sysname, @SQLA nvarchar(500), @SQLB nvarchar(500)

-- cursor to loop through file groups
DECLARE FG CURSOR FOR
SELECT DISTINCT groupid
FROM sysfilegroups

OPEN FG
FETCH NEXT FROM FG INTO @fgroupid

WHILE @@FETCH_STATUS = 0
BEGIN

    -- cursor to loop through the logical file names in this file group
    DECLARE FILENAME CURSOR FOR
    SELECT DISTINCT f.name
    FROM sysfiles f
    WHERE f.groupid = @fgroupid

    OPEN FILENAME
    FETCH NEXT FROM FILENAME INTO @fname

    WHILE @@FETCH_STATUS = 0
    BEGIN

        PRINT 'Current Filename = ' + @fname

        -- table-name cursor: builds a DBCC DBREINDEX command for each
        -- user table that has a clustered index and lives on this file
        DECLARE TBL CURSOR FOR
        SELECT DISTINCT 'DBCC DBREINDEX (''' + i.TABLE_NAME + ''')'
        FROM INFORMATION_SCHEMA.TABLES i
        INNER JOIN sysindexes si ON si.id = object_id(i.TABLE_NAME)
        INNER JOIN sysfiles sf ON sf.groupid = si.groupid
        WHERE objectProperty(object_id(i.TABLE_NAME), 'IsUserTable') = 1
          AND objectProperty(object_id(i.TABLE_NAME), 'TableHasClustIndex') = 1
          AND sf.name = @fname

        OPEN TBL
        FETCH NEXT FROM TBL INTO @SQLA

        WHILE @@FETCH_STATUS = 0
        BEGIN
            PRINT @SQLA
            EXEC (@SQLA)
            FETCH NEXT FROM TBL INTO @SQLA
        END

        CLOSE TBL
        DEALLOCATE TBL

        -- shrink the file we just reindexed before moving to the next one
        SELECT @SQLB = 'DBCC SHRINKFILE (''' + @fname + ''')'
        EXEC (@SQLB)

        FETCH NEXT FROM FILENAME INTO @fname
    END

    CLOSE FILENAME
    DEALLOCATE FILENAME

    FETCH NEXT FROM FG INTO @fgroupid
END

CLOSE FG
DEALLOCATE FG

Data Mart

What is a Data Mart?
How does it help the data retrieval process in terms of speed?

Quote:

Originally Posted by yogeshbhandare

What is a Data Mart?
How does it help the data retrieval process in terms of speed?


In some data warehouse implementations, a data mart is a miniature data warehouse; in others, it is just one segment of the data warehouse. Data marts are often used to provide information to functional segments of the organization. As for speed: because a data mart holds only the subset of data a particular group needs, queries against it typically scan far less data than they would against the full warehouse.

For more information, visit:
http://msdn2.microsoft.com/en-us/library/aa905978(SQL.80).aspx

Data marshalling from Sql server into business objects

Hi there,
I am having real problems finding the optimum way of retrieving data
from a Sql server and marshalling that into my c# business objects. I
have a fairly complex object hierarchy which I want to populate all in
a single shot (rather than relying on lazy loading which would cause
many roundtrips when the event's subordinates are enumerated). In my
object model...
an event has 0...n comments
a comment has 0...n visibility groups
an event has 1...n links
a link has 0...n link descriptors
an event has 1...n messages
In the relational world, this necessitates eight tables within my
schema:
tb_events
tb_events_comments
tb_events_comments_groups
tb_groups (pre-fetched)
tb_events_links
tb_links
tb_events_messages
tb_messages (pre-fetched)
The query I am using to retrieve this lot uses temporary tables and
goes like this...
/* -- START -- */
/* select all the events into a temporary table
this first statement can be modified if any queries are required
against
any of the subordinate
*/
select *
into #events
from tb_events
where is_deleted = 0
/* now select all the comments into a temporary table which
are subordinates of the events identified in query #1
*/
select *
into #events_comments
from tb_events_comments
where event_uid in (select uid from #events)
/* and the same for all the links
*/
select *
into #events_links
FROM tb_events_links
WHERE event_uid in (select uid from #events)
/* and now all the output queries which actually perform the retrieval
*/
select * from #events
select * from #events_comments
select * from tb_events_comments_groups where comment_uid in (select
uid from #events_comments)
select * from #events_links where event_uid in (select uid from
#events)
select * from tb_links where uid in (select link_uid from
#events_links)
select * from tb_link_descriptors where link_uid in (select link_uid
from #events_links)
select * from tb_events_messages where event_uid in (select uid from
#events)
/* and tidy up all the temp tables
*/
drop table #events
drop table #events_comments
drop table #events_links
/* -- END -- */
My question really is: is the above a valid way of retrieving all the
data required for population of the object hierarchy or are there any
nicer / faster / more elegant solutions? I've run it through Profiler
as both an ad-hoc script and a stored procedure (the sp is naturally
faster) and it doesn't seem to impose an unreasonable burden on the db.
The only other way I can think of is using a proxy object (EventProxy)
which contains "top line" details and if the full event is required
then it can be separately retrieved.
Surely there's a nicer way of doing it than this!?
Hope you can help,
Matt
I think your approach is suitable, not knowing anything about your
situation. Volume is the biggest concern I would have. Are you pulling
one event, 10 events, all events? How many events are in the table when you
query it? What kind of indexes? etc, etc.
Personally, I would not use the temp tables and would query the tables
directly. I might consider moving some of the subqueries to table-valued
functions for clarity and reuse.
Your biggest pain is going to be in your DAC code, where you have to
rebuild the object by looping through all these rows.
However, if you really want to be clever, you can make this one massive
query complete with all the joins, return it as XML, and serialize it
right into your object. Wouldn't that be slick? Not easy at all, but
very slick.
--
700cb Development, Inc.
http://www.700cb.net
.NET utilities, developer tools,
and enterprise solutions
"Matt" <matt.jones@.studio10.co.uk> wrote in news:1132060815.255026.14610
@.g14g2000cwa.googlegroups.com:

> Hi there,
> I am having real problems finding the optimum way of retrieving data
> from a Sql server and marshalling that into my c# business objects. I
> have a fairly complex object hierarchy which I want to populate all in
> a single shot (rather than relying on lazy loading which would cause
> many roundtrips when the event's subordinates are enumerated). In my
> object model...
> an event has 0...n comments
> a comment has 0...n visibility groups
> an event has 1...n links
> a link has 0...n link descriptors
> an event has 1...n messages
> In the relational world, this necessitates seven tables within my
> schema :
> tb_events
> tb_events_comments
> tb_events_comments_groups
> tb_groups (pre-fetched)
> tb_events_links
> tb_links
> tb_events_messages
> tb_messages (pre-fetched)
> The query I am using to retrieve this lot uses temporary tables and
> goes like this...
> /* -- START -- */
> /* select all the events into a temporary table
> this first statement can be modified if any queries are required
> against
> any of the subordinate
> */
> select *
> into #events
> from tb_events
> where is_deleted = 0
> /* now select all the comments into a temporary table which
> are subordinates of the events identified in query #1
> */
> select *
> into #events_comments
> from tb_events_comments
> where event_uid in (select uid from #events)
> /* and the same for all the links
> */
> select *
> into #events_links
> FROM tb_events_links
> WHERE event_uid in (select uid from #events)
> /* and now all the output queries which actually perform the retrieval
> */
> select * from #events
> select * from #events_comments
> select * from tb_events_comments_groups where comment_uid in (select
> uid from #events_comments)
> select * from #events_links where event_uid in (select uid from
> #events)
> select * from tb_links where uid in (select link_uid from
> #events_links)
> select * from tb_link_descriptors where link_uid in (select link_uid
> from #events_links)
> select * from tb_events_messages where event_uid in (select uid from
> #events)
> /* and tidy up all the temp tables
> */
> drop table #events
> drop table #events_comments
> drop table #events_links
> /* -- END -- */
> My question really is : is the above a valid way of retreiving all the
> data required for population of the object hierarchy or are there any
> nicer / faster / more elegant solutions? I've run it through Profiler
> as both an ad-hoc script and a stored procedure (the sp is naturally
> faster) and it doesn't seem to impose an unreasonable burden on the db.
> The only other way I can think of is using a proxy object (EventProxy)
> which contains "top line" details and if the full event is required
> then it can be separately retrieved.
> Surely there's a nicer way of doing it than this!?
> Hope you can help,
> Matt
>
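For the XML idea in the reply above, a sketch against the poster's tb_events and tb_events_comments tables (the non-key column names are guesses):

-- FOR XML AUTO nests each joined table inside its parent row,
-- so comments come back as child elements of their event.
SELECT ev.uid, ev.title,
       c.uid AS comment_uid, c.body
FROM tb_events ev
LEFT OUTER JOIN tb_events_comments c ON c.event_uid = ev.uid
WHERE ev.is_deleted = 0
FOR XML AUTO, ELEMENTS

The client then deserializes the XML stream into the object graph instead of stitching rows together in the data access layer.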

Data Mapping Database Report

I am working on a data mapping project (i.e. mapping tables from a SQL server database to VSAM files on a mainframe). I know that's bass-ackwards .. but that's what "the man" wants. And what "the man" wants he gets :)

What I am trying to do is to produce one report for each table (in the SQL server database). Each report would contain one column per field-name. Along with the field-name, I would like to also include the 10 most popular values of that field, a count for each, and one key per value (to refer back).

If anyone is following along, do you have any suggestions ?? I'm assuming Crystal MUST have this functionality, but I'm banging my head against a wall trying to figure out even where to begin.

Thanks in advance !!
The first part is easy - I presume you can use the report expert to get the raw data from the SQL table and show that as columns in the report. Am I right?

If you can do that, then you want to produce an individual count for each field - do you mean a count of unique elements for each field, and then the topN for each field?

Here's some sample data, by way of clarification:
ID  Name  City  Salary
1   John  A     20000
2   Fred  A     20000
3   Mary  A     50000
4   Kim   B     35000
5   ****  C     33000

A = 3  90000
B = 1  35000
C = 1  33000

If that is a very simple example, then you need to create a group for each field that you want to summarise, in this case City, and then create a Count field in the group footer for City, and a Sum field for Salary.

There's a start. The TopN side of things should be achievable using the TopN/Group Sort expert.

If you have lots of fields, you'll have lots of groups... my example groups on only City, so it is simple, but yours could end up looking rather ugly, I think!

Dave
|||Hey Dave,

You're on the right track, but what I want to do is "flip" the report around a little. Let's say I have a Customer table ... with SSN, Name, Addr.

I want my report to contain ONE ROW for each field-value ... not one column. My columns would be the Field-Name, Most Popular Occurrence, Refer-Back Key, and Count.

I would want my report to look like ...

Field  Most popular value  Key (to refer back)  Count (for that value)

SSN    111111111           111111111            1
       222222222           222222222            1
       333333333           333333333            1
       ...
       123459699           123459699            1

Name   John Smith          243050630            24
       Dave Smith          294848372            22
       Bob Jones           249858382            21
       ...
       Tom Brown           385838375            11

Addr   123 Main St         948473859            6
       123 Main Dr         938584894            6
       222 Main St         983583959            4
       ...
       123 Broadway        958347583            2

Thanks for your input !

I know that's ugly. My fields are lined up in my message text, but all of the spacing is ignored when the message is saved.

I want one row per field-name, and then 10 Values, refer-backs, and counts for each field name.
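One way to produce the per-field top-10 lists before the report ever sees them is a query per field, stacked with UNION ALL or written to a staging table; a sketch against an assumed Customer table (field names taken from the example above):

-- Top 10 most frequent values of one field, with one representative key each.
-- Repeat per field; the field list could also be generated from syscolumns.
SELECT TOP 10
    'Name' AS FieldName,
    Name AS PopularValue,
    MIN(SSN) AS ReferBackKey,  -- an arbitrary key that carries this value
    COUNT(*) AS ValueCount
FROM Customer
GROUP BY Name
ORDER BY COUNT(*) DESC

Crystal can then simply group the stacked result on FieldName, which gives the one-row-per-value layout sketched above.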

data mapping and migration

Has anyone used DTS packages for migrating old data to a new schema?

If so, are there any tutorials on this?

I'd prefer not to do this by hand. ;-)
Just use the DTS graphic wizard.
BOL is the best tutorial.
|||Try the wizard in Enterprise Manager. That's a nice easy intro to it.
|||I have a simple package where I want to dump data from one table to another. It's erroring out on this line.

DTSDestination("TaxExempt") =1

The destination field is int.

I don't receive any errors when I have the following:

DTSDestination("ShippingWeight") = DTSSource("weight")

For some reason it doesn't like the hard-coded value I supplied.

Any ideas why?

Thanks for the help.

Data mapping and Importing app for SQL server

Hello,

Our company often receives data from outside sources to add to our application. This data is usually provided to us in Excel, CSV, XML, etc. The files that we receive usually have different columns from the columns in our database, so we have to map these columns to our table structure to import.

I'm looking for an application that will easily allow me to load up the data file (whatever type it may be), expose the columns in the data file, allow me to map these columns to columns in our SQL Server, and then import the data. I know that this can be done with DTS; however, I'm looking for alternatives. Does anyone have any recommendations?

Thanks in advance.

I don't have a solution to your post, but I have the same problem. I just want to get notifications on the reply event of this thread. Hopefully someone will answer you and I will benefit.

Data mapping

We have 2 databases - Prod and Dev. They are not synchronized and have slightly different stored procedures and table structures (columns added or removed).

So Prod and Dev are not in sync, they are not mirrors.

Production db has production data.

How can I transition/move/map data from Prod to Dev?

For example, both databases have a Users table (f_name, l_name, field1, field2), and I need to populate the dev db with prod data.

What strategy should I follow? What tools should I use? How can I automate it?

Should I use the Import wizard or DTS?

SQL Server 2000

Thank you

Based on the information you provide, both the DTS Import Wizard and a Copy SQL Server Objects Task DTS package will work, and both can be automated with a job schedule. Basically they are the same.

If you need to transform data, which you don't mention, you can use a DTS package with transforms or write your own truncate/insert code and schedule either as a job.
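A minimal sketch of the truncate/insert approach, assuming a linked server named PRODSRV pointing at production and a Users table with the columns described above (all names are placeholders):

-- Refresh the dev copy of Users from production.
TRUNCATE TABLE DevDb.dbo.Users

INSERT INTO DevDb.dbo.Users (f_name, l_name, field1, field2)
SELECT f_name, l_name, field1, field2
FROM PRODSRV.ProdDb.dbo.Users

Listing the columns explicitly is what lets this tolerate the slight schema drift between the two databases: only the columns both sides share are copied.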

Data Mapping

I'm looking for a tool to make a graphical representation of my MS SQL
databases. Specifically, I was hoping to find something that could show the
fields in the tables of all the databases on an MS SQL server.
Anybody know an application that can do that?
Visio for enterprise architects
ERwin
"mindjuju" <mindjuju@.discussions.microsoft.com> wrote in message
news:71D75768-1B7E-4F06-8315-203D86FFAA8E@.microsoft.com...
> I'm looking for a tool to make a graphical representation of my MS SQL
> databases. specifically, I was hoped to find something that could show
> the
> fields in the tables of all the databases on an MS SQL server.
> Anybody know an application that can do that?
|||Hi,
You can use SQL Server Diagrams in Enterprise Manager, or else you can use
Erwin. Erwin will give you the physical and logical model of
your database.
Thanks
Hari
SQL Server MVP
"mindjuju" <mindjuju@.discussions.microsoft.com> wrote in message
news:71D75768-1B7E-4F06-8315-203D86FFAA8E@.microsoft.com...
> I'm looking for a tool to make a graphical representation of my MS SQL
> databases. specifically, I was hoped to find something that could show
> the
> fields in the tables of all the databases on an MS SQL server.
> Anybody know an application that can do that?
|||or ER-Studio
Greg Jackson
Pdx, Oregon

Data manipulation with One-to-many-to-one relationship

Is it possible to INSERT, UPDATE, and DELETE data in this type of relationship? If so, how in VS2005? I'm not having issues with SELECT, even without joins.

Thanks

Hi,

Try providing more information about what you are trying to do. Do you have tables set up in a DB already? What columns do you have in those tables, and what are the references that make a given row of data in each table unique (primary keys)?

If you don't have a schema yet, provide (in your own words), what facts you are trying to convey - such as 'A customer places an order. Order has line items. Each product has a product number.' etc... This will give viewers here a better understanding of what you need.

Good luck, in any event. BRN..

|||

Let's say for example:

Table 1 has Table1ID (PK) and Table1Data

Table 2 has Table2ID (PK), Table1ID (FK for Table1.Table1ID), Table3ID (FK for Table3.Table3ID)

Table 3 has Table3ID (PK), and Table3Data

Relationship is one-to-many for Table1.Table1ID to Table2.Table1ID and one-to-many for Table3.Table3ID to Table2.Table3ID, creating a many-to-many relationship for Table1 to Table3 using Table2 as an intersection table. I do not have any cascading.

I am able to query the database and gather the information, but need to be able to add, edit, and delete in a C# VS2005 project. Starting off small with something like this would enable me to conquer a bigger project that I would like to create. This is the first time I have had to work with data that required this type of relationship.

I'd prefer to see the SQL.

Is this enough information?

Many thanks

|||OK, let's take these 3 tables for example:

create table Employee (EmplNumber int not null primary key, EmplName nvarchar(20) not null)

create table Department (DeptNumber int not null primary key, DeptName nvarchar(50) not null)

-- relationship between Employee and Department

create table DeptEmpl (
    DeptNumber int not null foreign key references Department(DeptNumber)
        ON DELETE CASCADE ON UPDATE CASCADE,
    EmplNumber int not null foreign key references Employee(EmplNumber)
        ON DELETE CASCADE ON UPDATE CASCADE
)

insert into Employee values (1, 'John')
insert into Employee values (2, 'Terry')
insert into Employee values (3, 'Andrew')

insert into Department values (1, 'Marketing')
insert into Department values (2, 'Sales')
insert into Department values (3, 'Accounting')

insert into DeptEmpl values (1, 1)
insert into DeptEmpl values (1, 3)
insert into DeptEmpl values (2, 1)
insert into DeptEmpl values (2, 2)
insert into DeptEmpl values (2, 3)
insert into DeptEmpl values (3, 1)

There should be no problem when you try to insert data - the PK/FK constraints will maintain the referential integrity for you. The key point here is the ON DELETE/UPDATE CASCADE clauses: they will help to maintain referential integrity during DELETE/UPDATE, so no extra work needs to be done manually. If you have other concerns, feel free to post them. :)
|||

Thanks Iori_Jay

It is completely clear to me what you are doing in this example. How does one handle user input variables from a web application?

Let's say the user wants to insert a new Employee 'Edgar' that belongs to Department 'Sales'.

|||OK, typical steps for such an INSERT would be:

1. Insert the information for the new employee 'Edgar' into the Employee table, and get the EmplNumber of the newly inserted employee.
2. Check the Department table to see whether there is a 'Sales' department; if not, then add the 'Sales' department.
3. Insert a row into the DeptEmpl table to represent the relationship between 'Edgar' and 'Sales'.

So a sample stored procedure looks like:

CREATE PROCEDURE usp_InsertNewEmployee @EmpName nvarchar(20), @DeptName nvarchar(50)
AS
IF ((LEN(@EmpName) = 0) OR (LEN(@DeptName) = 0))
BEGIN
    RAISERROR('Please input both the employee name and the department name which he (she) belongs to', 16, 1)
    RETURN  -- missing in the original post: stop on bad input instead of falling through
END

DECLARE @newEmpNumber INT, @DeptNumber INT
-- ISNULL guards the first-ever row, when MAX() returns NULL
SELECT @newEmpNumber = ISNULL(MAX(EmplNumber), 0) + 1 FROM Employee
SELECT @DeptNumber = DeptNumber FROM Department WHERE DeptName = @DeptName

INSERT INTO Employee SELECT @newEmpNumber, @EmpName
IF (@DeptNumber IS NULL)
BEGIN
    SELECT @DeptNumber = ISNULL(MAX(DeptNumber), 0) + 1 FROM Department
    INSERT INTO Department SELECT @DeptNumber, @DeptName
END
INSERT INTO DeptEmpl SELECT @newEmpNumber, @DeptNumber
go

EXEC usp_InsertNewEmployee 'Edgar', 'Sales'
|||

Iori_jay,

Thanks so much for your help. I've learned many things from this post. I appreciate you sharing your talents.

ICE|8

|||It's my pleasure to help with this issue. :) Wish you a happy festival!