Could you give more details about the flat file format and how you configured the flat file connection manager?
Thanks.
|||DarrenSQLIS wrote:
Look at the output window. Anything of interest there?
Below are some output lines that interest me:
Information: 0x40043007 at Data Flow Lockbox Validate File and Header Info, DTS.Pipeline: Pre-Execute phase is beginning.
Information: 0x402090DC at Data Flow Lockbox Validate File and Header Info, Flat File Lockbox [1]: The processing of file "c:\casestudy\lockbox\samplelockbox.txt" has started.
Information: 0x400490F4 at Data Flow Lockbox Validate File and Header Info, Lookup BankBatchID [373]: component "Lookup BankBatchID" (373) has cached 0 rows.
"Flat File Lockbox [1]" is the flat file source component. the flat file connection manager is connected to file "c:\casestudy\lockbox\samplelockbox.txt"|||
|||Bob Bojanic wrote:
Could you give more details about the flat file format and how you configured the flat file connection manager?
Thanks.
Below is the entire contents of the text file:
H080105 B1239-99Z-99 0058730760
I4001010003 181INTERNAT
C4001010004 01844400
I4002020005 151METROSPOOO1
C4002020006 02331800
I4003030009 MAGIC CYCLES
C4003030010 02697000
I4004040013 LINDELL
C4004040014 02131800
I4005040017 151GMASKI0001
C4005040019 01938800
the general tab of the flat file connection manager is configured as follows:
file name: c:\casestudy\lockbox\samplelockbox.txt
locale: english (united states)
unicode: unchecked
code page: 1252 (ANSI - Latin I)
format: ragged right
text qualifier: <none>
header row delimiter: {CR}{LF}
header rows to skip: 0
column names in first data row: unchecked|||
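As an aside, a rough idea of what "ragged right" means for this file can be sketched outside SSIS. This is not SSIS itself, just a minimal Python reading of the sample data under the settings above (CRLF row delimiter, no text qualifier); the field split (one record-type character, then whitespace-separated fields) is an assumption inferred from the sample, not a documented lockbox layout.

```python
import io

# First three rows of the sample file, with the CRLF row delimiter
# configured in the connection manager.
SAMPLE = (
    "H080105 B1239-99Z-99 0058730760\r\n"
    "I4001010003 181INTERNAT\r\n"
    "C4001010004 01844400\r\n"
)

def parse_lockbox(stream):
    """Yield (record_type, fields) per non-empty row.

    Assumed layout: the first character is a record type (H/I/C in the
    sample); the rest of the line is split on whitespace. In a ragged-right
    file the last column simply runs to the end of each (possibly short) row.
    """
    for line in stream:
        line = line.rstrip("\r\n")
        if not line:
            continue
        yield line[0], line[1:].split()

with io.StringIO(SAMPLE) as f:
    rows = list(parse_lockbox(f))

print(rows[0])  # ('H', ['080105', 'B1239-99Z-99', '0058730760'])
```

If the real layout is fixed-width rather than whitespace-separated, the split would instead use column offsets matching the columns defined on the connection manager's Columns tab.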
Can you try this?
In a copy of your package, delete everything after the flat file source.
Add a row count component after the Flat File Source (see http://msdn2.microsoft.com/en-us/library/ms141136(SQL.90).aspx for the row count component)
Now you can run the data flow with nothing but the flat file - if data flows to the row count, then your source is ok.
The reason I ask is because I see that your lookup component cached 0 rows; I wonder if that is actually the issue. Lookup row caching occurs during pre-execute, so if there is a problem there, your flat file source will never even get started.
Donald Farmer
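The isolation test above can be approximated outside SSIS too: if a plain read of the file yields the expected number of rows, the file itself is readable and row-delimited as configured, and the problem lies downstream. A hedged Python stand-in (not the SSIS Row Count component itself), using the cp1252 code page and CRLF handling from the connection manager; the demo writes a throwaway copy of the first three sample rows rather than touching the real path from the thread:

```python
import os
import tempfile

def count_rows(path, encoding="cp1252"):
    """Count non-empty rows, roughly as the flat file source would see them."""
    # newline="" preserves the CRLF row delimiters instead of translating them.
    with open(path, encoding=encoding, newline="") as f:
        return sum(1 for line in f if line.strip())

# Demo against a temporary copy of the first three sample rows; in the
# thread the real file is c:\casestudy\lockbox\samplelockbox.txt.
with tempfile.NamedTemporaryFile("w", suffix=".txt", newline="",
                                 encoding="cp1252", delete=False) as tmp:
    tmp.write("H080105 B1239-99Z-99 0058730760\r\n"
              "I4001010003 181INTERNAT\r\n"
              "C4001010004 01844400\r\n")
    path = tmp.name

n = count_rows(path)
os.remove(path)
print(n)  # 3
```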
|||Donald, I took your suggestion and copied my package. Then I deleted everything from the data flow task except a flat file source and a Row Count. I ran this package, and the data still wasn't flowing out of the flat file source.
Next, I deleted my data flow task and created a new one, then added a flat file source and a Row Count to the new data flow. Lo and behold, that resolved the issue. It seems that my original data flow somehow became corrupted, and that this prevented the data from flowing out of the flat file source. To me, this seems to be a bug. Is there a way to repair my original data flow so that I don't have to duplicate all of my previous work?|||
Well, that's odd, for sure. You can select, copy, and paste components from one data flow to another, so you could try copying and pasting the rest of your original data flow into your new one and hooking it up. There will be messages about metadata needing to be fixed up, but if it is all effectively identical, it should be relatively easy to do.
(Copy and back up that new data flow first, of course.)
I don't really have any suggestions about what could have gone wrong. I do wonder whether the new data flow is identical in all ways to the old one, but it is difficult to tell without examining them in detail.
Donald
|||Donald Farmer wrote:
Well, that's odd, for sure. You can select, copy, and paste components from one data flow to another, so you could try copying and pasting the rest of your original data flow into your new one and hooking it up. There will be messages about metadata needing to be fixed up, but if it is all effectively identical, it should be relatively easy to do.
(Copy and back up that new data flow first, of course.)
I don't really have any suggestions about what could have gone wrong. I do wonder whether the new data flow is identical in all ways to the old one, but it is difficult to tell without examining them in detail.
Donald
Donald, you were correct about there being messages about metadata needing to be fixed up. Below are the messages:
TITLE: Package Validation Error
Package Validation Error
ADDITIONAL INFORMATION:
Error at Data Flow Task [DTS.Pipeline]: input column "line" (158) has lineage ID 28 that was not previously used in the Data Flow task.
Error at Data Flow Task [DTS.Pipeline]: "component "Derived Column Checks 1" (156)" failed validation and returned validation status "VS_NEEDSNEWMETADATA".
Error at Data Flow Task [DTS.Pipeline]: One or more component failed validation.
Error at Data Flow Task: There were errors during task validation.
(Microsoft.DataTransformationServices.VsIntegration)
You previously stated that fixing this should be relatively easy. So, how should I go about fixing it?|||
There should be a warning triangle in the components that need to be fixed up. Double click on those components to open the UI and the metadata may be fixed automatically, or you will be prompted with a mapping dialog to fix up the changes.
Donald
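For the curious: the lineage IDs in those VS_NEEDSNEWMETADATA errors (such as "lineage ID 28" above) live as attributes in the package's .dtsx XML, so stale references can also be located by scanning the file. A hedged sketch; the `lineageId` attribute name matches my recollection of the SSIS 2005 package format, so treat it as an assumption if your version differs:

```python
import re

def lineage_ids(dtsx_text):
    """Return the set of lineage IDs referenced anywhere in the package XML."""
    return {int(m) for m in re.findall(r'lineageId="(\d+)"', dtsx_text)}

# Illustrative fragment only; element/attribute names are assumptions
# modeled on the pipeline XML inside a 2005-era .dtsx file.
sample_xml = '<outputColumn id="157" name="line" lineageId="28"/>'
print(lineage_ids(sample_xml))  # {28}
```

In practice, though, Donald's advice above (let the component UI remap the columns) is the supported route; editing the XML by hand is a last resort.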
|||Donald Farmer wrote:
There should be a warning triangle in the components that need to be fixed up. Double click on those components to open the UI and the metadata may be fixed automatically, or you will be prompted with a mapping dialog to fix up the changes.
Donald
OK, that worked. Thanks for your assistance.