I am trying to understand what things affect the
throughput of data when queries are executed against SQL
Server. I'm trying to determine if there are
settings, parameters, or times that can improve the speed of
a recordset being returned to the application.
What I have noticed in our environment is that a
recordset of 1,000 rows is returned very quickly, whereas
a recordset of 1,000,000,000 rows can take over an hour.
I understand that the bandwidth will have a large effect
on the time, but there are times when the bandwidth is
wide open and it still takes a significant amount of
time. What things can I do or set to optimize this as much
as possible?
What can you do? Quite simply, don't return 1 billion row result sets. No
human is able to digest that amount of information, so you should filter or
summarize it at the server using WHERE, SUM, etc., and then return a more
limited number of rows to the client.
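
For example, something like this pushes the work to the server (a rough,
untested sketch over ODBC from Python; the driver, server, table, and column
names are made up):

import pyodbc

# Summarize at the server instead of returning every detail row.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=MyDatabase;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Instead of: SELECT * FROM dbo.Readings  (every detail row crosses the wire)
cursor.execute(
    """
    SELECT device_id,
           CONVERT(char(10), reading_time, 120) AS reading_day,
           SUM(value) AS total_value,
           COUNT(*)   AS reading_count
    FROM dbo.Readings
    WHERE reading_time >= ?        -- filter at the server
    GROUP BY device_id, CONVERT(char(10), reading_time, 120)
    """,
    "2004-01-01",
)
for row in cursor.fetchall():
    print(row.device_id, row.reading_day, row.total_value, row.reading_count)

Only the aggregate rows travel over the network, so the transfer time stops
being proportional to the number of detail rows.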
Jacco Schalkwijk
SQL Server MVP
"Jim Abel" <jim.abel@.lmco> wrote in message
news:5c7301c42d7f$a97fd480$a101280a@.phx.gbl...
> I am trying to understand what things affect the
> throughput of data when queries are executed against SQL
> Server. I'm trying to determine if there are
> settings, parameters, or times that can improve the speed of
> a recordset being returned to the application.
> What I have noticed in our environment is that a
> recordset of 1,000 rows is returned very quickly, whereas
> a recordset of 1,000,000,000 rows can take over an hour.
> I understand that the bandwidth will have a large effect
> on the time, but there are times when the bandwidth is
> wide open and it still takes a significant amount of
> time. What things can I do or set to optimize this as much
> as possible?
The query is used to fill a CSV file that a third party
application uses to generate charts driven by the
end users. The reason I'm asking the question is to
see if there are options to speed the flow of data from
the database to the CSV file to shorten the time it
currently takes. I can't modify the third party's software
and I can't query less data, so ODBC seems like the
place to look. Your answer is logical for different
circumstances but doesn't get to the original question:
are there items that can be tweaked in ODBC that can
increase throughput?
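
On the client side, the levers that usually matter are a forward-only,
read-only fetch, large fetch batches, a bigger network packet size on the
connection or DSN, and streaming rows straight into the file instead of
materializing the whole recordset in memory first. A rough, untested sketch
over ODBC from Python (pyodbc), with made-up driver, server, and table names:

import csv
import pyodbc

# Stream a big result set straight into a CSV file.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=MyDatabase;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.arraysize = 10000           # fetch in large batches to cut round trips
cursor.execute("SELECT id, reading_time, value FROM dbo.Readings")  # made-up query

with open("readings.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])  # header row
    while True:
        rows = cursor.fetchmany()  # uses cursor.arraysize
        if not rows:
            break
        writer.writerows(rows)     # write as we go; nothing accumulates in memory

cursor.close()
conn.close()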
>--Original Message--
>What can you do? Quite simply, don't return 1 billion row result sets. No
>human is able to digest that amount of information, so you should filter or
>summarize it at the server using WHERE, SUM, etc., and then return a more
>limited number of rows to the client.
>--
>Jacco Schalkwijk
>SQL Server MVP
>
>"Jim Abel" <jim.abel@.lmco> wrote in message
>news:5c7301c42d7f$a97fd480$a101280a@.phx.gbl...
>
What do you use to generate the CSV file? bcp will be the quickest.
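
Something along these lines, if bcp is an option (a rough, untested sketch;
the server name, database, and query are made up). bcp's queryout mode writes
the rows straight to a file without going through a client recordset at all:

import subprocess

# Export query results directly to a comma-delimited file with bcp queryout.
query = "SELECT id, reading_time, value FROM MyDatabase.dbo.Readings"  # made-up query
subprocess.run(
    [
        "bcp", query, "queryout", r"C:\exports\readings.csv",
        "-S", "MYSERVER",   # made-up server name
        "-T",               # trusted (Windows) connection
        "-c",               # character mode output
        "-t,",              # comma field terminator
    ],
    check=True,
)

Note that character mode with a comma terminator does not quote fields, so the
output is only a clean CSV if the data itself contains no commas.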
--
Jacco Schalkwijk
SQL Server MVP
<anonymous@.discussions.microsoft.com> wrote in message
news:70ca01c42fae$ac9de360$a101280a@.phx.gbl...
> The query is used to fill a CSV file that a third party
> application uses to generate charts driven by the
> end users. The reason I'm asking the question is to
> see if there are options to speed the flow of data from
> the database to the CSV file to shorten the time it
> currently takes. I can't modify the third party's software
> and I can't query less data, so ODBC seems like the
> place to look. Your answer is logical for different
> circumstances but doesn't get to the original question:
> are there items that can be tweaked in ODBC that can
> increase throughput?