Abstract
Here you can find answers to some frequently asked questions.
After the setup procedure finishes, you will have a HypersonicSQL database created. While this is quite usable, Shark also gives you the option to use another database vendor: DB2, PostgreSQL, MySQL, and so on.
First, stop any Shark instance that may be running.
Then edit the configure.properties file and set values for:
db_loader_job
    name of the directory containing the Octopus loader job; the options are: db2, hsql, informix, msql, mysql, oracle, postgresql, sybase

db_ext_dirs
    directory containing the JAR file(s) with your JDBC driver; if you need to specify more than one directory here, concatenate them using ${path.separator}

${db_loader_job}_JdbcDriver
    class name of the JDBC driver you want to use (this entry is already filled with a default value)

${db_loader_job}_Connection_Url
    full database URL (this entry is already filled with a default value, too)

${db_loader_job}_user
    username for database authentication

${db_loader_job}_passwd
    password for database authentication
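As an illustration, a configure.properties fragment for switching to PostgreSQL might look like the following. The driver class and URL syntax are the usual PostgreSQL JDBC values, but the directory, host, database name, and credentials are placeholders you must adapt to your own setup:

```properties
db_loader_job=postgresql
db_ext_dirs=/opt/jdbc/postgresql
postgresql_JdbcDriver=org.postgresql.Driver
postgresql_Connection_Url=jdbc:postgresql://localhost:5432/shark
postgresql_user=shark
postgresql_passwd=shark
```

Note how the ${db_loader_job} placeholder in the property names expands to the chosen vendor name (here, postgresql).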
Then run configure.[bat|sh].
When loading the newly created database, Octopus will complain that it cannot drop indices and tables; these warnings can safely be ignored.
At this point, the sharkdb.properties file (located in the lib/client folder) and Shark.conf are adjusted to use the selected database.
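Before restarting Shark against the new database, it can be useful to check that the JDBC driver class named in your configuration is actually loadable, since a wrong db_ext_dirs setting is a common cause of startup failures. Below is a minimal, self-contained sketch; the postgresql_JdbcDriver key and its value are illustrative, not read from your generated files:

```java
// DriverCheck.java -- minimal sketch. The "postgresql_JdbcDriver" key mirrors
// the ${db_loader_job}_JdbcDriver entry described above; the value here is
// illustrative and not taken from your actual configuration.
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class DriverCheck {

    /** Returns true if the named JDBC driver class can be loaded. */
    static boolean driverAvailable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        // Hypothetical fragment of configure.properties.
        props.load(new StringReader(
            "postgresql_JdbcDriver=org.postgresql.Driver\n"));

        String driver = props.getProperty("postgresql_JdbcDriver");
        System.out.println(driver + " loadable: " + driverAvailable(driver));
    }
}
```

If the class is not loadable, double-check that the driver JAR really sits in one of the db_ext_dirs directories.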
In the process of testing, there will come a point when you'll want to clear the database and start from scratch. To clear the database, you can run the main configure.[bat|sh] script. If you don't want to wait for the unnecessary filtering and archiving of the war file, use bin/recreateDB.[bat|sh] instead. The latter runs only the Octopus loader job to drop and create the tables and indices.
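Since the script comes in a .bat and a .sh flavor, a small wrapper can pick the right one for the current platform. This is only a sketch: the SHARK_HOME variable and its /opt/shark default are assumptions, not part of the Shark distribution, and the wrapper echoes the command rather than running it.

```shell
#!/bin/sh
# Sketch: pick the platform-appropriate recreate script.
# SHARK_HOME is an assumption -- point it at your Shark installation.
SHARK_HOME="${SHARK_HOME:-/opt/shark}"

case "$(uname -s)" in
    CYGWIN*|MINGW*|MSYS*) script="recreateDB.bat" ;;
    *)                    script="recreateDB.sh"  ;;
esac

# Echo first; replace 'echo' with 'exec' once the path is confirmed.
echo "would run: $SHARK_HOME/bin/$script"
```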