
View Full Version : Replacing Database Back-End



Geoff Bonallack
2005-03-02, 15:13
Hi all,

I posted this on the main Slim list, but it's probably the wrong place
(given the lack of response).

Is it possible for the end user to change the back-end database engine
that is used, without any major code changes? For example, I have a
copy of SQL Server that I use for other stuff, and it would be nice to
incorporate my music DB into this too. I'm running XP.

Thanks
Geoff

Robert Moser
2005-03-02, 16:06
Yes, you can use a different back end. Currently MySQL is
semi-supported (it works, but is not officially supported). In theory,
any SQL back end supported by DBI would work.

To do it, you would need to do the following:
1) Install either the DBD::Sybase or DBD::ODBC drivers.
2) Create a directory for some SQL scripts, <slimroot>/server/SQL/ODBC
(if using DBD::ODBC) or <slimroot>/server/SQL/Sybase (if using
DBD::Sybase). Create an Upgrades directory under it.
3) Translate the dbclear.sql, dbcreate.sql, dbdrop.sql scripts from
either of the other directories into something the DBD driver of your
choice can work with.
4) Set the dbsource, dbusername, and dbpassword preferences to the
appropriate values.
For example: using DBD::ODBC with a system DSN of SlimServer, you would
set dbsource to dbi:ODBC:SlimServer.
The specified user should have permission to create and drop tables.

If you've got all that working, then send in the scripts that worked.
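To make the shape of step 3 concrete, here is a minimal sketch of what the dbcreate/dbclear/dbdrop scripts amount to, written in Python against SQLite rather than the actual Perl/DBI code. The cut-down tracks table is illustrative only, not the real SlimServer schema:

```python
import sqlite3

# Hypothetical, cut-down stand-ins for dbcreate.sql / dbclear.sql / dbdrop.sql;
# the real SlimServer schema has many more tables and columns.
DBCREATE = """
CREATE TABLE tracks (
    id INTEGER PRIMARY KEY AUTOINCREMENT,  -- each backend spells auto-increment differently
    url TEXT NOT NULL,
    title TEXT
);
"""
DBCLEAR = "DELETE FROM tracks;"
DBDROP = "DROP TABLE tracks;"

conn = sqlite3.connect(":memory:")
conn.executescript(DBCREATE)

# A throwaway row to prove the schema works before translating it further.
conn.execute("INSERT INTO tracks (url, title) VALUES (?, ?)",
             ("file:///D:/MP3/song.mp3", "Song"))

conn.executescript(DBCLEAR)
assert conn.execute("SELECT COUNT(*) FROM tracks").fetchone()[0] == 0

conn.executescript(DBDROP)
```

Prototyping the three scripts against a throwaway database like this makes it easy to iterate on the syntax before pointing the server at the result.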

Geoff Bonallack wrote:
> Hi all,
>
> I posted this on the main Slim list, but it's probably the wrong place
> (given the lack of response).
>
> Is it possible for the end user to change the back-end database engine
> that is used, without any major code changes? For example, I have a
> copy of SQL Server that I use for other stuff, and it would be nice to
> incorporate my music DB into this too. I'm running XP.
>
> Thanks
> Geoff
>

dean
2005-03-02, 16:26
It would be great if this stuff could be added to a documentation page
under Technical Information in the built-in help.

On Mar 2, 2005, at 3:06 PM, Robert Moser wrote:

> Yes, you can use a different back end. Currently MySQL is
> semi-supported (it works, but is not officially supported). In
> theory, any SQL back end supported by DBI would work.
>
> To do it, you would need to do the following:
> 1) Install either the DBD::Sybase or DBD::ODBC drivers.
> 2) Create a directory for some SQL scripts, <slimroot>/server/SQL/ODBC
> (if using DBD::ODBC) or <slimroot>/server/SQL/Sybase (if using
> DBD::Sybase). Create an Upgrades directory under it.
> 3) Translate the dbclear.sql, dbcreate.sql, dbdrop.sql scripts from
> either of the other directories into something the DBD driver of your
> choice can work with.
> 4) Set the dbsource, dbusername, and dbpassword preferences to the
> appropriate values.
> For example: using DBD::ODBC with a system DSN of SlimServer you
> would set dbsource to dbi:ODBC:SlimServer
> The user specified should have the ability to create/drop tables.
>
> If you've got all that working, then send in the scripts that worked.
>
> Geoff Bonallack wrote:
>> Hi all,
>> I posted this on the main Slim list, but it's probably the wrong place
>> (given the lack of response).
>> Is it possible for the end user to change the back-end database engine
>> that is used, without any major code changes? For example, I have a
>> copy of SQL Server that I use for other stuff, and it would be nice to
>> incorporate my music DB into this too. I'm running XP.
>> Thanks
>> Geoff
>>

Robert Moser
2005-03-02, 17:53
dean blackketter wrote:
> It would be great if this stuff could be added to a documentation page
> under Technical Information in the built-in help.

Patches welcome!



Sorry, couldn't resist

Geoff Bonallack
2005-03-03, 07:09
On Wed, 02 Mar 2005 15:06:41 -0800, Robert Moser wrote:
> Yes, you can use a different back end. Currently MySQL is
> semi-supported (it works, but is not officially supported). In theory,
> any SQL back end supported by DBI would work.
>
> To do it, you would need to do the following:
> 1) Install either the DBD::Sybase or DBD::ODBC drivers.
> 2) Create a directory for some SQL scripts, <slimroot>/server/SQL/ODBC
> (if using DBD::ODBC) or <slimroot>/server/SQL/Sybase (if using
> DBD::Sybase). Create an Upgrades directory under it.
> 3) Translate the dbclear.sql, dbcreate.sql, dbdrop.sql scripts from
> either of the other directories into something the DBD driver of your
> choice can work with.
> 4) Set the dbsource, dbusername, and dbpassword preferences to the
> appropriate values.
> For example: using DBD::ODBC with a system DSN of SlimServer you would
> set dbsource to dbi:ODBC:SlimServer
> The user specified should have the ability to create/drop tables.
>
> If you've got all that working, then send in the scripts that worked.

Great, thanks Robert - I'll have a play with it and see if I can get
it all working. As you say, I'll post the modified files once
everything comes into alignment. I'll also document the series of
steps, for posting in the wiki or similar.

Cheers
Geoff

Geoff Bonallack
2005-04-10, 21:42
On Mar 2, 2005 7:06 PM, Robert Moser <rlmoser (AT) comcast (DOT) net> wrote:
> Yes, you can use a different back end. Currently MySQL is
> semi-supported (it works, but is not officially supported). In theory,
> any SQL back end supported by DBI would work.
>
> To do it, you would need to do the following:
> 1) Install either the DBD::Sybase or DBD::ODBC drivers.
> 2) Create a directory for some SQL scripts, <slimroot>/server/SQL/ODBC
> (if using DBD::ODBC) or <slimroot>/server/SQL/Sybase (if using
> DBD::Sybase). Create an Upgrades directory under it.
> 3) Translate the dbclear.sql, dbcreate.sql, dbdrop.sql scripts from
> either of the other directories into something the DBD driver of your
> choice can work with.
> 4) Set the dbsource, dbusername, and dbpassword preferences to the
> appropriate values.
> For example: using DBD::ODBC with a system DSN of SlimServer you would
> set dbsource to dbi:ODBC:SlimServer
> The user specified should have the ability to create/drop tables.
>
> If you've got all that working, then send in the scripts that worked.
>
> Geoff Bonallack wrote:
> > Hi all,
> >
> > I posted this on the main Slim list, but it's probably the wrong place
> > (given the lack of response).
> >
> > Is it possible for the end user to change the back-end database engine
> > that is used, without any major code changes? For example, I have a
> > copy of SQL Server that I use for other stuff, and it would be nice to
> > incorporate my music DB into this too. I'm running XP.
> >
> > Thanks
> > Geoff

Ok, I have got it creating the database correctly now I think (for SQL
Server), based on Robert's suggestions. Unfortunately when I run
slimserver.pl I get an error:

2005-04-11 00:30:56.3123 Couldn't create track for file:///D:/MP3 :
Can't insert new Slim::DataStores::DBI::Track: DBD::ODBC::st execute
failed: [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot insert
the value NULL into column 'id', table 'slimserver.slim.tracks';
column does not allow nulls. INSERT fails. (SQL-23000).

If I set tracks.id to allow NULL values then other stuff goes
wrong; but since the original schema specifies the ID column, I would
have thought it should be populated correctly in the first place.
Does anyone have any idea why this is happening, or what I can do to
move on with it? I've attached the output from
slimserver.pl --d_info --d_scan
if it helps.

Cheers
Geoff

Dan Sully
2005-04-10, 21:44
* Geoff Bonallack shaped the electrons to say...

>2005-04-11 00:30:56.3123 Couldn't create track for file:///D:/MP3 :
>Can't insert new Slim::DataStores::DBI::Track: DBD::ODBC::st execute
>failed: [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot insert
>the value NULL into column 'id', table 'slimserver.slim.tracks';
>column does not allow nulls. INSERT fails. (SQL-23000).

All of the id columns are "auto increment" - does SQL Server have that concept?

-D
--
<iNoah> my pdp goes to 11.
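For illustration, this is the behaviour the code is relying on, sketched in Python with sqlite3 (SQLite's AUTOINCREMENT standing in for MySQL's AUTO_INCREMENT or SQL Server's IDENTITY; the table here is a toy, not the real schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tracks (id INTEGER PRIMARY KEY AUTOINCREMENT, url TEXT)")

# The INSERT omits the id column entirely; the engine assigns the next value,
# and the driver exposes it afterwards (lastrowid here, last_insert_id in DBI).
cur = conn.execute("INSERT INTO tracks (url) VALUES (?)", ("file:///D:/MP3/a.mp3",))
first_id = cur.lastrowid

cur = conn.execute("INSERT INTO tracks (url) VALUES (?)", ("file:///D:/MP3/b.mp3",))
second_id = cur.lastrowid

assert second_id == first_id + 1
```

The two moving parts — how the column is declared and how the last generated value is fetched back — are exactly the two places the back-ends diverge.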

Geoff Bonallack
2005-04-11, 05:42
On Apr 11, 2005 12:44 AM, Dan Sully <dan (AT) slimdevices (DOT) com> wrote:
> All of the id columns are "auto increment" - does SQL Server have that concept?

Yep - great, thanks Dan. In fact, this came to me last night at some
point, as these things always do after you ask. SQL Server does
indeed have an autoincrement ("IDENTITY") column property, but there are
specific requirements for interacting with it (specifically for
inserting new records and getting the last value that was generated).
With the field type fixed, I now get:

2005-04-11 08:24:10.7272 Couldn't create track for file:///D:/MP3 :
Can't insert new Slim::DataStores::DBI::Track: Can't get last insert
id at C:/Program
Files/SlimServer/server/Slim/DataStores/DBI/DBIStore.pm line 391
at C:/Program Files/SlimServer/server/Slim/DataStores/DBI/DBIStore.pm line 391

Is there somewhere I can tweak the actual SQL used to interact with the DB?
Also, does the list of autoincrement fields include moodlogic_id (in
tracks, contributors and genres)?

Cheers
Geoff

John A. Tamplin
2005-04-11, 06:18
On Sun, 10 Apr 2005, Dan Sully wrote:

> * Geoff Bonallack shaped the electrons to say...
>
> >2005-04-11 00:30:56.3123 Couldn't create track for file:///D:/MP3 :
> >Can't insert new Slim::DataStores::DBI::Track: DBD::ODBC::st execute
> >failed: [Microsoft][ODBC SQL Server Driver][SQL Server]Cannot insert
> >the value NULL into column 'id', table 'slimserver.slim.tracks';
> >column does not allow nulls. INSERT fails. (SQL-23000).
>
> All of the id columns are "auto increment" - does SQL Server have that concept?

Unfortunately, this is one area where database differences extend beyond
the DDL code required. Every database supports autoincrementing
identifier fields differently: for some you insert 0, for others NULL,
and for others you have to use a separate object (such as a sequence)
and refer to it from default values in the table definition.

For DBI code where I care about portability between database backends, I
roll my own sequence numbers by keeping them in a separate table and
updating it as part of the transaction that inserts the row needing a
unique id. It is a lot more work and it reduces concurrency if you have
a lot of parallel inserts, but it works consistently across all platforms.
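A sketch of that roll-your-own scheme, again in Python/sqlite3 rather than the actual Perl (the serial_values table name and schema here are illustrative; SQLite's BEGIN IMMEDIATE takes a database-wide write lock up front, which is coarser than the row-level locking a bigger engine would use):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # explicit transactions
conn.execute("CREATE TABLE serial_values (table_name TEXT PRIMARY KEY, next_id INTEGER)")
conn.execute("CREATE TABLE tracks (id INTEGER PRIMARY KEY, url TEXT)")
conn.execute("INSERT INTO serial_values VALUES ('tracks', 1)")

def insert_track(url):
    # Allocate the id and insert the row in one transaction, so the
    # sequence bump and the insert succeed or fail together.
    conn.execute("BEGIN IMMEDIATE")  # write lock from the start, no read-read race
    next_id = conn.execute(
        "SELECT next_id FROM serial_values WHERE table_name = 'tracks'").fetchone()[0]
    conn.execute(
        "UPDATE serial_values SET next_id = next_id + 1 WHERE table_name = 'tracks'")
    conn.execute("INSERT INTO tracks (id, url) VALUES (?, ?)", (next_id, url))
    conn.execute("COMMIT")
    return next_id

ids = [insert_track("file:///D:/MP3/%d.mp3" % n) for n in range(3)]
```

Because every allocation serializes on the serial_values row, concurrent inserters queue up behind each other — which is the concurrency cost John mentions.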

--
John A. Tamplin jat (AT) jaet (DOT) org
770/436-5387 HOME 4116 Manson Ave
Smyrna, GA 30082-3723

Geoff Bonallack
2005-04-11, 06:51
On Apr 11, 2005 9:18 AM, John A. Tamplin <jat (AT) jaet (DOT) org> wrote:
> Unfortunately, this is one area where database differences extend beyond
> the DDL code required. Every database has different ways of supporting
> autoincrementing identifier fields. Some you have to put 0, others you
> have to put null, others you have to use a separate object (such as a
> sequence) and use default values in the table definition that refer to the
> other object.

Yep, SQL Server requires that the autoincrement column is omitted from
the INSERT statement.

>
> For my DBI code that I care about portability between database backends, I
> roll my own sequence numbers by having a separate table for them and just
> update it as part of the transaction inserting the row needing a unique
> id. It is a lot more work and it reduces concurrency if you have a lot of
> parallel inserts, but it works consistently across all platforms.

This is what I was heading towards - creating an INSERT trigger for
each table needing an ID, and obtaining a table lock on a (new,
separate) ID table to retrieve and increment the ID. As you point
out, though, this causes concurrency issues, and I guess it will
increase the initial scan time due to the extra processing.

Is there any chance that this is something I can address in
platform-specific SQL rather than by rolling a new table / stored
procedure combo? Have all SQL statements been abstracted somewhere on
a per-platform basis?

Cheers
Geoff

Robert Moser
2005-04-11, 07:28
Geoff Bonallack wrote:
> Also, does the list of autoincrement fields include moodlogic_id (in
> tracks, contributors and genres)?

That might be an autoincrement field, but only in the external moodlogic
database. In ours, it is just a number.

John A. Tamplin
2005-04-11, 07:43
On Mon, 11 Apr 2005, Geoff Bonallack wrote:

> On Apr 11, 2005 9:18 AM, John A. Tamplin <jat (AT) jaet (DOT) org> wrote:
> > Unfortunately, this is one area where database differences extend beyond
> > the DDL code required. Every database has different ways of supporting
> > autoincrementing identifier fields. Some you have to put 0, others you
> > have to put null, others you have to use a separate object (such as a
> > sequence) and use default values in the table definition that refer to the
> > other object.
>
> Yep, SQL Server requires that the autoincrement column is omitted from
> the INSERT statement.

Yuck, it treats an insert not listing the field differently from an
explicit null? Since the standard specifies that omitting a field from
an insert is the same as inserting with a null value, that is a pretty
nasty way of doing it -- not that it is a big surprise MS doesn't follow
standards (to be fair, none of the autoincrement implementations are
standard).

> This is what I was heading towards - creating an INSERT trigger for
> each table needing a ID, and obtaining a table lock on a (new,
> separate) ID table and retrieving / incrementing the ID. As you point
> out, though, this causes concurrency issues, and I guess it will
> increase the initial scan time due to the extra processing.

You don't need to explicitly lock the table -- just do it all inside a
transaction (assuming you don't mind restricting the DBI backend to ones
that support transactions):

begin work
select next_id from serial_values where table_name='track';
update serial_values set next_id=next_id+1 where table_name='track';
insert into tracks ($trackid,...)
commit work

That way you can at least have concurrent inserts going into different
tables (at least if the serial_values table has row locking).

--
John A. Tamplin jat (AT) jaet (DOT) org
770/436-5387 HOME 4116 Manson Ave
Smyrna, GA 30082-3723

Steve Baumgarten
2005-04-11, 12:34
John A. Tamplin wrote:

> begin work
> select next_id from serial_values where table_name='track';
> update serial_values set next_id=next_id+1 where table_name='track';
> insert into tracks ($trackid,...)
> commit work
>
> That way you can at least have concurrent inserts going into different
> tables (at least if the serial_values table has row locking).

You actually do need a table lock here, or at least you need to do the
update first in order to get an exclusive lock on the row -- there's a
race condition otherwise.

Here's why: if two processes each execute that code at exactly the same
time, each process will execute "begin work" and start a transaction;
then each will execute the "select next_id" statement and retrieve
exactly the same value from the row, as the select is a shared lock and
doesn't prevent other processes from reading the data.

So now you have two processes each with the same ID in memory, but each
thinking that it has a unique ID. No good.

Process B will then wait until process A has committed its transaction
and then go ahead and finish its own, with the result you'll get a
duplicate key error on the insert into the tracks table.

The other way to do this is to do the update first, then the insert,
then select the value, then commit the transaction. That avoids the race
condition and ensures that each process has a unique ID returned to it
from the select:

begin tran
update serial_values
    set next_id = next_id + 1 where table_name = 'track'
insert into tracks
    select s.next_id, [...]
    from serial_values s
    where table_name = 'track'
select next_id
    from serial_values where table_name = 'track'
commit

This works because the update statement places an exclusive lock on the
row immediately after beginning the transaction. Thus if two processes
are executing the code at exactly the same time, the second process will
wait right at that point for the first to commit. This allows the first
to do other things with the newly incremented value without fear that
another process will be able to sneak in and read/use the new value
behind its back.

(Of course this is all moot for a server environment where there's only
ever going to be one client that writes to the database, i.e., the
Slimserver. It's more just a general database programming point.)
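The update-first ordering can be sketched in Python/sqlite3. SQLite only has database-level write locks rather than row locks, so this doesn't reproduce the blocking behaviour exactly, but the statement order — update, insert, then select, all inside one transaction — is the point (table and column names follow the example above and are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # explicit transactions
conn.executescript("""
    CREATE TABLE serial_values (table_name TEXT PRIMARY KEY, next_id INTEGER);
    CREATE TABLE tracks (id INTEGER PRIMARY KEY, url TEXT);
    INSERT INTO serial_values VALUES ('track', 1);
""")

def insert_track(url):
    conn.execute("BEGIN")
    # Update FIRST: this takes the exclusive lock before anyone reads the
    # value, closing the read-read race described above.
    conn.execute("UPDATE serial_values SET next_id = next_id + 1 "
                 "WHERE table_name = 'track'")
    conn.execute("INSERT INTO tracks (id, url) "
                 "SELECT s.next_id, ? FROM serial_values s "
                 "WHERE s.table_name = 'track'", (url,))
    new_id = conn.execute("SELECT next_id FROM serial_values "
                          "WHERE table_name = 'track'").fetchone()[0]
    conn.execute("COMMIT")
    return new_id

a = insert_track("file:///D:/MP3/a.mp3")
b = insert_track("file:///D:/MP3/b.mp3")
assert b == a + 1
```

On an engine with row locking, a second concurrent transaction blocks at the UPDATE until the first commits, so no two inserters ever see the same next_id.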

SBB






John A. Tamplin
2005-04-11, 13:52
On Mon, 11 Apr 2005, Steve Baumgarten wrote:

> You actually do need a table lock here, or at least you need to do the
> update first in order to get an exclusive lock on the row -- there's a
> race condition otherwise.

Sorry, I was being fast and loose with the example code. I actually do
a select ... for update of ... which places an exclusive lock on the row.
Of course, that may not be portable to all the DBI backends, but it worked
for all the ones I was using at the time and was encapsulated in one
place. Your update first mechanism works just as well and is probably
more portable.

--
John A. Tamplin jat (AT) jaet (DOT) org
770/436-5387 HOME 4116 Manson Ave
Smyrna, GA 30082-3723

Steve Baumgarten
2005-04-11, 14:14
John A. Tamplin wrote:

> Sorry, I was being fast and loose with the example code. I actually do
> a select ... for update of ... which places an exclusive lock on the row.
> Of course, that may not be portable to all the DBI backends, but it worked
> for all the ones I was using at the time and was encapsulated in one
> place. Your update first mechanism works just as well and is probably
> more portable.

No problem, I was just being pedantic (also, I did it the wrong way not
too long ago and actually did get bitten by the race condition, so I
figured I might save someone else the grief I went through).

I like the "for update" syntax, but I work with Sybase primarily and
with Sybase that's only valid inside a cursor. It's a useful extension,
though, certainly more so than Sybase's "HOLDLOCK" kludge.

SBB





Geoff Bonallack
2005-04-12, 07:57
Thanks for all the pointers and suggestions; I've got it reliably
creating the database structure, containing a trigger to insert IDs.
I'm still getting an error when the first 'tracks' insert happens
though. From the command prompt:

2005-04-11 23:05:57.8859 Couldn't create track for file:///D:/MP3 :
Can't insert new Slim::DataStores::DBI::Track: Can't get last insert
id at C:/Program
Files/SlimServer/server/Slim/DataStores/DBI/DBIStore.pm line 391
at C:/Program Files/SlimServer/server/Slim/DataStores/DBI/DBIStore.pm line 391

The query that is trying to execute is:
declare @p1 int
set @p1=8
exec sp_prepexec @p1 output,N'@P1 varchar(80),@P2 varchar(80),@P3
varchar(80),@P4 int,@P5 varchar(80)',N'INSERT INTO tracks (titlesort,
url, title, tag, ct)
VALUES (@P1, @P2, @P3, @P4, @P5)
','MP3','file:///D:/MP3','MP3',1,'dir'
select @p1

A bunch of research indicates that this is the way that the ODBC
driver talks to the database, and that the above translates into a
query that looks like:

INSERT INTO tracks (titlesort, url, title, tag, ct)
VALUES ('MP3', 'file:///D:/MP3', 'MP3', 1, 'dir')

This works fine if I run it separately, but the original (full) ODBC
statement fails to prepare correctly, resulting in an error.
Does anyone have any suggestions for this?

Thanks
Geoff

cdoherty
2005-04-21, 00:07
Geoff Bonallack wrote:
> This is what I was heading towards - creating an INSERT trigger for
> each table needing an ID, and obtaining a table lock on a (new,
> separate) ID table and retrieving / incrementing the ID. As you point
> out, though, this causes concurrency issues, and I guess it will
> increase the initial scan time due to the extra processing.


If you're doing locking, you'd only experience contention problems
rather than concurrency problems, and only at relatively high
transaction rates. (E.g. in Oracle you'd have a MAX_ID table with one
row that you'd do a SELECT ... FOR UPDATE on, so with a global ID
sequence every INSERT would have to wait its turn at that row.) Based
on my experience, though, I'd expect that to only become a problem at
sustained rates of many thousands of transactions per minute.

chris

John A. Tamplin
2005-04-21, 05:19
On Thu, 21 Apr 2005, cdoherty wrote:

> if you're doing locking, you'd only experience contention problems
> rather than concurrency, and that only at relatively high transaction
> rates (e.g. in Oracle you'd have a MAX_ID table with one row that you'd
> do a SELECT ... FOR UPDATE on, so for a global ID sequence, every INSERT
> would have to wait for their turn at that row). but based on my
> experience I'd expect that to only be a problem at sustained rates of
> many thousands of transactions per minute.

Of course that depends on the complexity and duration of the transactions
containing the id fetch.

--
John A. Tamplin jat (AT) jaet (DOT) org
770/436-5387 HOME 4116 Manson Ave
Smyrna, GA 30082-3723

Conroy
2006-08-27, 19:40
Any chance I could get a copy of the SQL Server prototype code that was being attempted here? Did you ever get it working? I have some experience with MS SQL Server, and I may give it a go to see if I can get it working if it is still broken. Please little 'r' me at conor.cunningham@comcast.net.

Thanks,

Conor

PS: The ANSI standard didn't cover sequences before SQL:2003, and inserting into an identity column with a zero or null isn't really compliant with the standard either - the other back-ends ignore those values and insert a new counter value.

PPS: It would really be useful to allow the SQL generation layer to be parameterized a bit, as performance will suffer non-trivially with the trigger-based approach. The side-table approach will work, but it won't scale too well (as mentioned).