
This queue is for tickets about the SQL-Statement CPAN distribution.

Report information
The Basics
Id: 16928
Status: rejected
Priority: 0/
Queue: SQL-Statement

Owner: Nobody in particular
Requestors: Dan [...] DWright.Org

Bug Information
Severity: Critical
Broken in: 1.14
Fixed in: (no value)

Subject: fetchrow_hashref doesn't work when you are selecting multiple aggregates of the same column.
Attached is an updated 06build.t file for SQL::Statement; it demonstrates a problem with using fetchrow_hashref to get values from a query that uses aggregate functions. It turns out that if you select multiple aggregates of one column, for example: max(foo) as m, avg(foo) as a, sum(foo) as s ... and then attempt to fetch the results using fetchrow_hashref, only the last value will exist in the hash. Additionally, the key for that value will be whatever the first alias should have been. For the example above, we would get { 'm' => (the sum of 'foo') }
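The collapse described above can be illustrated in plain Perl, independent of DBI (a hypothetical sketch, not SQL::Statement internals): if every aggregate alias mis-resolves back to the same name, pairing the name list with the row values via a hash slice keeps only the last value under a single key.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch of the failure mode (not SQL::Statement code):
# aliases m, a, s all mis-resolved to the first alias 'm', so building
# the row hash with a hash slice keeps only the last value.
my @names = ('m', 'm', 'm');     # should have been ('m', 'a', 's')
my @row   = (1000, 675, 2700);   # max(foo), avg(foo), sum(foo)

my %h;
@h{@names} = @row;               # hash slice: later values overwrite earlier

printf "keys: %d, m => %s\n", scalar(keys %h), $h{m};   # keys: 1, m => 2700
```

This reproduces exactly the reported shape: one key (the first alias) holding the last value (the sum).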
Attached file: 06group.t (text/x-perl)
#!/usr/bin/perl -w
$|=1;
use strict;
use Test::More;
use lib qw( ../lib );
use vars qw($DEBUG);

eval { require DBI; require DBD::File; };
if ($@) {
    plan skip_all => "No DBI or DBD::File available";
}
else {
    plan tests => 7;
}

use SQL::Statement;
printf "SQL::Statement v.%s\n", $SQL::Statement::VERSION;

my $dbh = DBI->connect('dbi:File(RaiseError=1):');
$dbh->do($_) for <DATA>;

my $sth = $dbh->prepare(" SELECT SUM(sales), MAX(sales) FROM biz ");
ok('2700~1000^' eq query2str($sth), 'AGGREGATE FUNCTIONS WITHOUT GROUP BY');

$sth = $dbh->prepare(" SELECT region,SUM(sales), MAX(sales) FROM biz GROUP BY region ");
ok('West~2000~1000^East~700~700^' eq query2str($sth), 'GROUP BY one column');

$sth = $dbh->prepare(" SELECT region,store,SUM(sales), MAX(sales) FROM biz GROUP BY region,store ");
ok('West~Los Angeles~1500~1000^West~San Diego~500~500^East~Boston~700~700^' eq query2str($sth),
   'GROUP BY several columns');

$sth = $dbh->prepare(" SELECT region,store,SUM(sales) as s, MAX(sales) as m FROM biz GROUP BY region,store ");
$sth->execute;
my $data = $sth->fetchrow_hashref();
use Data::Dumper;
diag Dumper($data);
is( $data->{region}, 'West' );
is( $data->{store},  'Los Angeles' );
is( $data->{s}, 1500 );
is( $data->{m}, 1000 );

sub query2str {
    my ($sth) = @_;
    $sth->execute;
    my $str = '';
    while (my $r = $sth->fetch) {
        $str .= sprintf "%s^", join('~', map { defined $_ ? $_ : 'undef' } @$r);
    }
    return $str unless $DEBUG;
    printf "%s\n", join ',', @{ $sth->{NAME} };
    print "<$str>\n";
    return $str;
}

__END__
CREATE TEMP TABLE biz (region TEXT, store TEXT, sales INTEGER)
INSERT INTO biz VALUES ('West','Los Angeles',1000 )
INSERT INTO biz VALUES ('West','San Diego' ,500 )
INSERT INTO biz VALUES ('West','Los Angeles',500 )
INSERT INTO biz VALUES ('East','Boston' ,700 )
I believe the problem is that hashes are being used to store information about the column aliases and column objects. The problem with this is that hashes must have unique keys; however, none of the metadata about the columns is necessarily unique. That is to say, it is perfectly valid to select the same column twice:

SELECT a, a from foo;

Or to select multiple columns using the same alias:

SELECT a as bar, b as bar from foo;

Or to perform multiple set functions on the same column:

SELECT sum(a) as s, min(a) as mn, max(a) as mx, avg(a) as avg from foo;

I believe the solution is to stop saying "bar is an alias for a" and start saying "bar is an alias for column 0". In other words, either start keying the hashes with the numeric id of their column, or start storing the aliases in an array. My suggestion is to put them in an array, since this would make it easier to expand table.* columns later on.
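The proposed positional bookkeeping can be sketched like this (hypothetical names, not the actual SQL::Statement data structures): keying by source column name collapses duplicates, while storing aliases by column position loses nothing.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch (not SQL::Statement internals) of the suggested fix.
# For: SELECT sum(a) as s, min(a) as mn, max(a) as mx FROM foo
# all three result columns derive from the same source column 'a'.
my @columns = (
    { source => 'a', alias => 's'  },
    { source => 'a', alias => 'mn' },
    { source => 'a', alias => 'mx' },
);

# Keying a hash by source name collapses the duplicates: only one survives.
my %alias_for;
$alias_for{ $_->{source} } = $_->{alias} for @columns;
printf "hash keeps %d alias\n", scalar keys %alias_for;    # 1

# Storing aliases positionally ("mx is an alias for column 2") keeps all.
my @alias_at = map { $_->{alias} } @columns;
printf "array keeps %d aliases\n", scalar @alias_at;       # 3
```

An array also preserves column order for free, which is what would make expanding table.* columns straightforward later on.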
This is impressive, but it seems to me to be more a bug in DBD::File than in SQL::Statement. Without support for multiple selects in one statement, the two functions cannot both be computed correctly (which would be a bug).

This service is sponsored and maintained by Best Practical Solutions and runs on infrastructure.
