
Red Hat Fedora 14 Release On the Way – 26 days left

Red Hat’s Fedora remains one of the most popular and versatile Linux distributions available today.

Red Hat Fedora 14 on the way to your desktop

Release details are listed below:

New Software Versions

  • Linux Kernel 2.6.35 – Newest Linux kernel
  • GNOME 2.32 – Newest GNOME desktop
  • KDE Plasma Desktop 4.5.0 – Newest KDE desktop
  • OpenOffice.org 3.3 – Office suite
  • NetBeans 6.9 – An open-source integrated development environment (IDE)
  • Eclipse Helios 3.6 – An open-source integrated development environment (IDE)
  • Python 2.7 – Python updated to the upstream 2.7 release of the Python programming language
  • Perl 5.12 – Perl updated to version 5.12
  • MeeGo 1.0 – MeeGo Netbook UX 1.0 experience in Fedora

Fedora 14 Other Improvements, Changes and Features

  • Linux Kernel 2.6.35 – Linux 2.6.35 includes a wide range of new features
  • Desktop Virtualization – Spice is a complete open-source solution for interaction with virtualized desktops, focusing on the provision of high-quality remote access to QEMU machines.
  • Faster JPEG Compression/Decompression – The replacement of libjpeg with libjpeg-turbo brings speed improvements
  • New and Updated Programming Languages – Fedora 14 sees the introduction of D, a systems programming language combining the power and high performance of C and C++ with the programmer productivity of modern languages like Ruby and Python.
  • Better Tools For Developers – NetBeans 6.9, Eclipse Helios 3.6, D programming
  • KDE Plasma Desktop 4.5.0 – KDE 4.5.0 introduces window tiling and better notification features, along with many stability and performance improvements.
  • Sugar 0.90 – The latest release of Sugar features major usability improvements for the first login screen and the control panel, as well as support for 3G networks.
  • Improved Netbook Experience With MeeGo – The MeeGo Netbook UX 1.0 provides a user interface tailored specifically for netbooks, building on the foundations laid by Moblin in previous Fedora releases.
  • IPMI Server Management Made Simple – ipmiutil, an easy-to-use, fully-featured IPMI server management utility that allows a wide range of management functions to be performed with just a few commands.

Fedora 14 Beta Desktop

Mozilla has just announced Firefox 4 beta is now available for Android and Maemo

Firefox 4 beta for mobile

Mozilla has just announced Firefox 4 beta is now available for Android and Maemo. Mozilla says on its blog that “A major focus of this release is to increase performance and responsiveness,” and that Firefox 4 beta for mobile is “built on the same technology platform as Firefox for the desktop,” but optimized for the mobile environment.

The mobile version features Firefox Sync to sync your phone browser with your desktop browser, the Awesome Screen and Bar, “which instantly gives you access to your recent history, bookmarks and tabs,” and, probably most importantly, this release features pinch-to-zoom as well.

On the technical side, Mozilla says:

“Two of the big architecture changes are Electrolysis and Layers. Our alpha contained Electrolysis which allowed the browser interface to run in a separate process from the one rendering Web content, resulting in a much more responsive browser. This beta brings the Layers pieces which improve overall performance and in graphics areas such as scrolling, zooming and animations.”

Mozilla wants your help testing this beta version: go ahead and download the release, use it, and post your feedback.

HP to Launch New WebOS Phones Soon (Early Next Year)

HP will introduce smartphones in early 2011 using the WebOS software it acquired through its $1.2 billion Palm buy earlier this year.

HP-Palm-WebOS

“You will see us coming early next year with new phones,” Senior Vice President Eric Cador told an industry conference. HP bought Palm, an early pioneer in handheld devices, to tap into the fast-growing smartphone industry and get access to Palm’s acclaimed WebOS operating system.

“More importantly we acquired WebOS,” Cador told the conference in Barcelona, adding that Palm’s intellectual property was “extremely fundamental” in the deal.

HP, the world’s largest technology company by revenue, is already a dominant force in PCs, servers, services and printers. But without a credible smartphone offering, it risked being left behind in a rising and highly profitable market, one that rivals both at home and in Asia are increasingly moving into.

PC makers Dell, Lenovo and Acer are all pushing into smartphones, which offer advanced services such as streaming video, e-mail and GPS in addition to voice calls.

HP Palm Pre-WebOS Phone

Online Schema Change for MySQL


It is great to be able to build small utilities on top of an excellent RDBMS.


Thank you MySQL. This is a small but complex utility to perform online schema change for MySQL. We call it OSC and the source is here (Code is also available at bottom).

Some ALTER TABLE statements take too long from the perspective of some MySQL users. The fast index create feature for the InnoDB plugin in MySQL 5.1 makes this less of an issue, but an ALTER can still take minutes to hours for a large table, and for some MySQL deployments that is too long.

A workaround is to perform the change on a slave first and then promote the slave to be the new master. But this requires a slave located near the master. MySQL 5.0 added support for triggers, and some replication systems have been built using triggers to capture row changes. Why not use triggers for this? The openarkkit toolkit did just that with oak-online-alter-table. We have published our version of an online schema change utility (OnlineSchemaChange.php, aka OSC).

The remainder of this document is copied from the internal documents that were written for this project. Note that this project was done by Vamsi and he did an amazing job with it. In addition to writing the tool, writing the docs and doing a lot of testing, he also found and fixed or avoided a few bugs in MySQL to make sure OSC would be reliable.

Overview

If the row format of the database allows addition of a new column (possibly positioned at the end of existing columns with some default value) without modifying every row of the table, adding a column could simply be a metadata change which can be done very fast. In such databases, an exclusive lock is needed only for a very short time for the metadata change. Our understanding is that the InnoDB row format does not allow this, and changing the row format would be a significant project, hence we do not consider this approach. Also, note that this approach would not work for operations like defragmentation.

OSC algorithms typically have several phases:

  • copy – where they make a copy of the table
  • build – where they work on the copy until the copy is ready with the new schema
  • replay – where they propagate the changes that happened on the original table to the copy table. This assumes that there is a mechanism for capturing changes.
  • cut-over – where they switch the tables, i.e. rename the copy table as the original. There is typically a small amount of downtime while switching the tables, and a small amount of replay is also typically needed during the cut-over.

Note that the above operations can be done within the storage engine itself, or using an external (PHP) script. We followed the latter approach as it can be implemented much faster. An advantage of doing it within the storage engine is that some optimizations can be done that are not available when doing these operations externally.

Copy Phase

When the binlog is enabled, InnoDB gets read locks on table S during a statement such as “insert into T select * from S”. To avoid this and to reduce the load on MySQL we select the data into an outfile and load from the outfile.

Rather than generating one big outfile, we do multiple scans of the table, where each scan covers about 500000 rows (batchsize is a parameter to OnlineSchemaChange and its default value is 500000). The first scan scans from the start of the table using LIMIT 500000. Subsequent scans start at the position where the previous scan left off. For example, for a 2-column PK, if the 1st select reads up to [x, y], the where clause of the 2nd select will look like ((col1 = x and col2 > y) OR (col1 > x)). A sketch of this chunked scan is shown below. We patched InnoDB to not get the read locks and expect to get rid of these outfiles in the future. However, we will continue to do multiple scans of the table with each scan going after different rows than previous scans.

For efficiency, with the InnoDB plugin on 5.1 we drop all non-clustered (NC) indexes before loading data into the copy table, and recreate them after the load. As a future optimization there may be some cases where it is useful to drop and recreate the clustered (C) index as well. We do not drop NC indexes in which the 1st column is an AUTO_INCREMENT column. Also, on InnoDB 5.0 we do not drop non-clustered indexes as recreating them is very slow.
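As a rough illustration of the chunked scan (not the tool’s literal output), the first two chunks for a hypothetical table t with a two-column primary key (col1, col2) might look like the SQL below; the table, column names and outfile paths are made up for the example, and batchsize is shown at its default of 500000:

-- first scan: start from the beginning of the table
SELECT col1, col2, payload
FROM t
ORDER BY col1, col2
LIMIT 500000
INTO OUTFILE '/var/lib/mysql/test/__osc_tbl_t.1';

-- second scan: resume after the last PK value [x, y] read by the first scan
SELECT col1, col2, payload
FROM t
WHERE (col1 = x AND col2 > y) OR (col1 > x)
ORDER BY col1, col2
LIMIT 500000
INTO OUTFILE '/var/lib/mysql/test/__osc_tbl_t.2';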

Capturing changes

Some of the approaches for capturing changes for replay are as follows:

  • Use the statement level binary log. Unfortunately, this approach is problematic when statements that refer to other tables are replayed. If the other tables are read during the replay phase, they may return different data than what they returned during the original execution of the statement. If those statements update the other tables, those updates need to be skipped during replay.
  • Use the row level binary log. This approach would work assuming we filter out updates to other tables during replay. However, many MySQL deployments don’t use row based replication (RBR) yet. Also, we need the original SQL in the binlog even when RBR is used, and that feature has yet to appear in an official release.
  • Use triggers to capture changes. This approach has extra overhead as changes to the table are recorded in a change capture table. Also, we need to get a short duration exclusive lock for creating triggers, as MySQL does not allow creating triggers while holding a READ lock. If we don’t get any lock while creating triggers, we risk losing changes done by transactions that are active at the time of selecting data into the outfile, if those changes were done prior to creating triggers. The trigger approach has the advantage of less effort and less risk of breaking stuff, so we decided to use it for OSC.

The change capture table is referred to as the deltas table. It has all the columns of the original table plus two additional columns: an integer autoincrement column to track the order of changes, and an integer column to track the DML type (insert, update or delete).

  • An insert trigger is created on the original table to capture all column values of the row being inserted in deltas.
  • A delete trigger is created on the original table to capture only the PK columns of the row being deleted in deltas.
  • An update trigger is created on the original table so that if the update does not change the PK columns, it captures the new values of all columns in deltas. If the update changes the PK columns, then the update trigger captures it as a delete followed by an insert. A possible optimization in the future is to log only changed columns.

Replay phase

It is useful to do most of the replay work without blocking changes to the original table. Multiple replay passes are used and only the final replay is done under a WRITE lock on the table. Because there are multiple passes we need to avoid replaying the same change multiple times. The following approaches are available to do this:

  • Delete the records from deltas as they are replayed. When a record is deleted, the entire record is put in the transaction log (possibly containing large columns) and this might be too much load.
  • Have a column ‘done’ in deltas and set it for records as they are replayed. Updates generate less transaction log than deletes, but if the ‘done’ column is not indexed, we will be scanning deltas on each pass anyway.
  • Save the IDs of the replayed records in a temporary table so that OSC does not write to deltas.

We chose to save IDs in a temporary table.

Another consideration is how to propagate changes from the deltas table to the copytable. There are at least two approaches:

  • Select the columns from the deltas table into PHP code and pass them back to MySQL through update, insert or delete commands. This could move large column values back and forth between PHP and MySQL.
  • Only fetch the ID column of deltas into PHP code, and then construct the insert, update or delete statements such that column values are copied directly from deltas to the copytable.

We use the second approach and only fetch the ID columns.

There are three phases for replaying changes: after all outfiles are loaded, after indexes are recreated, and during the swap phase.

Cut-over phase

MySQL offers two ways of renaming a table foo to bar:

  • Rename table ‘foo’ to ‘bar’. Multiple tables can be renamed atomically using the rename command, which makes it attractive for swapping two tables. Unfortunately, this command cannot be executed while holding table locks, or inside a larger transaction (i.e. rename has to be a transaction all by itself). So we are unable to use this.
  • Alter table ‘foo’ rename ‘bar’. Alter table causes an implicit commit, but it can be the last statement in a multi-statement transaction, and it can be executed while holding table locks. So we use this, but two tables cannot be swapped atomically using the alter table command; we need two separate statements.

Our cut-over phase looks like:

  • lock the tables (original table, new table, change capture table) in exclusive mode
  • replay any additional changes that happened after the last replay
  • alter the original table by renaming it to some old table name
  • alter the copytable by renaming it to the original table name

Since alter table causes an implicit commit in InnoDB, InnoDB locks get released after the first alter table. So any transaction that sneaks in after the first alter table and before the second alter table gets a ‘table not found’ error. The second alter table is expected to be very fast though, because the copytable is not visible to other transactions and so there is no need to wait. A sketch of the whole cut-over sequence is shown below.
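As a rough illustration (not the tool’s literal output), the cut-over corresponds to SQL along these lines, assuming a hypothetical table foo, its copy table __osc_new_foo, its change capture table __osc_deltas_foo, and a backup name __osc_old_foo (the backup-name prefix here is illustrative):

SET SESSION autocommit = 0;             -- so that LOCK TABLES takes InnoDB locks
LOCK TABLES foo WRITE, __osc_new_foo WRITE, __osc_deltas_foo WRITE;
-- final replay of any remaining captured changes happens here
ALTER TABLE foo RENAME __osc_old_foo;   -- implicit commit releases the InnoDB locks
ALTER TABLE __osc_new_foo RENAME foo;   -- fast: the copy table is not visible to others
COMMIT;
UNLOCK TABLES;
SET SESSION autocommit = 1;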

Error handling

There are two basic cases of errors:

  • A SQL command issued by OSC fails due to some error, but the MySQL server is still up
  • The MySQL server crashes during OSC

Here are the various entities created by OSC:

  • triggers
  • new non-temporary tables (the copy table, the deltas table, and the backup table to which the original table is renamed)
  • temp tables
  • outfiles

As we create an entity, we use a variable to track its cleanup. For example, when we create deltas, we set a variable $this->cleanup_deltastable to indicate that deltas needs to be cleaned up. This is not necessary for temp tables as they are automatically nuked when the script ends. A cleanup() method does the cleanup based on these cleanup variables. The cleanup() method is used during both successful termination of the script and failure termination.

However, if the MySQL server crashes, the cleanup steps would also fail. The plan to handle MySQL failures is to have a ‘force_cleanup’ mode for the OSC script, which would clean up all the triggers, non-temporary tables, and outfiles that would have been created by OSC. One caution while using the force_cleanup mode is that if the names of triggers/outfiles/tables that OSC would have created coincide with an existing trigger/outfile/table that has nothing to do with OSC, that entity may get dropped. The chances of coincidence are very slim though, as we use prefixes like __osc_ for entities that OSC creates. This issue does not arise during regular cleanup (i.e. non-forced) because cleanup is done based on cleanup variables in that case.

Note that normally failures during OSC don’t have to be acted on urgently, as the existence of stray tables/outfiles/triggers is not a critical problem. However, an exception is if the failure happens after the original table is renamed to a backup table but before the copy table is renamed as the original table. In that case there would be two tables – the backup table and the copytable – with identical contents except for the schema change. Applications would get ‘table not found’ errors until the issue is fixed. During force_cleanup, if it detects that both the backup table and the copytable exist, it renames the backup table to the original table.

Replication

OSC is not really making any changes on its own, but only propagating the changes done by other transactions (which are replicated). So we set sql_log_bin = 0 for OSC. For schema changes like adding a column, this puts a requirement that the schema change must be done on slaves first.

Assumptions that are validated in the code

  1. The original table must have a PK. Otherwise an error is returned.
  2. No foreign keys should exist. Otherwise an error is returned (a sketch of such a check is shown after this list).
  3. No AFTER_{INSERT/UPDATE/DELETE} triggers must exist. Otherwise create trigger would fail and an error is returned.
  4. If the PK is being altered, the post-alter table should support efficient lookups on the old PK columns. Otherwise an error is returned. The reason for this assumption is that PHP code may have queries/inserts/updates/deletes based on old PK columns and they need to be efficient. Another reason is that during replay, the ‘where’ clauses generated have old PK columns and so the replay phase would be very slow.
  5. If two OSCs are executed on the same table concurrently, only the first one to create the copytable would succeed and the other one would return an error.
  6. OSC creates triggers/tables/outfiles with the prefix __osc_. If coincidentally objects with those names already exist, an error is returned as object creation would fail.
  7. Since we only tested OSC on 5.1.47 and 5.0.84, if the version is not one of those two, it returns an error.
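As an aside, here is a hedged sketch of how the foreign-key validation might be expressed against information_schema; the exact query OSC runs is not shown in this document, so treat this as illustrative only, with ‘dbname’ and ‘tablename’ as placeholders:

-- foreign keys defined on the table itself:
SELECT constraint_name
FROM information_schema.table_constraints
WHERE table_schema = 'dbname' AND table_name = 'tablename'
  AND constraint_type = 'FOREIGN KEY';

-- foreign keys on other tables that reference this table:
SELECT constraint_name
FROM information_schema.key_column_usage
WHERE referenced_table_schema = 'dbname'
  AND referenced_table_name   = 'tablename';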

Assumptions that are NOT validated in the code

  1. Schema changes are done on the slave before the master. (If the master has more columns than the slave, replication may break.)
  2. If OSC is done concurrently with alter table on the same table, a race condition could cause “lost schema changes”. For example, if column foo is being added using OSC and column bar is being added using alter table directly, it is possible that one of the column additions is lost.
  3. Schema changes are backward compatible, such as addition of a column. Column name changes or dropping a column would cause an error on the 1st load.
  4. When OSC is run with OSC_FLAGS_FORCE_CLEANUP, it drops triggers/tables/outfiles with the prefix __osc_. So if coincidentally objects with those names exist that have nothing to do with OSC, they would get dropped.

Steps in detail (listed in the order of execution)

  • Initialization
  • create_copy_table
  • alter_copy_table
  • create_deltas_table
  • create_triggers
  • start snapshot xact
  • select_table_into_outfile
  • drop NC indexes
  • load copy table
  • Replay Changes
  • recreate NC indexes
  • Replay Changes
  • Swap tables
  • Cleanup

They are described in more detail below.

Slight difference in the sequence of steps in 5.0 and 5.1

Note that (unfortunately) we need to use slightly different sequences for 5.0 and 5.1 – and that is not good. This must be done to compensate for different behavior in those versions.

This order works in 5.1 but not 5.0 (I am only showing the relevant part of the sequence):

  1. Lock the table in WRITE mode
  2. Create the insert, update and delete triggers
  3. Unlock tables
  4. Start the snapshot transaction
  5. Scan the deltas table and track these deltas in ‘changes to exclude from replay’
  6. Scan the original table into multiple outfiles
  7. End the snapshot xact
  8. Load data from all outfiles into the copytable
  9. Replay the changes that have not been excluded in step 5

Since the scan done in step 6 should already see the changes captured in step 5, we exclude them from replay.

The above order does not work for 5.0 because creating a trigger after locking the table hangs in 5.0. See bug 46780.

This order works in 5.0 but not in 5.1

Same as above except that steps 1 and 2 are reversed, i.e. create the triggers before locking.

Note that the table lock is for ensuring that transactions that changed the table before the triggers were created are all committed. Any changes done after the snapshot transaction began in step 4 should be captured by triggers. So even if we get the table lock after creating triggers, the purpose of waiting for all prior transactions would still be achieved. So it should work in theory.

However, this sequence does not work in 5.1 in my automated unit tests as it causes the scan in step 5 to exclude some changes from replay that are not captured in the scan in step 6. (For example, if a concurrent xact updates row R, the snapshot xact in step 5 sees the row in the deltas table inserted by the update, but step 6 sees the old image of the row instead of the new image.)

The MySQL docs state:

“For transactional tables, failure of a statement should cause rollback of all changes performed by the statement. Failure of a trigger causes the statement to fail, so trigger failure also causes rollback.”

So that means the trigger is executed as part of the same transaction as the DML that activated the trigger, right? We don’t know why the snapshot xact in OSC is seeing the effect of the trigger but not the effect of the original DML, and we are not sure if this is a bug.

Code Vocabulary/Glossary

  • $this->tablename is the name of the original table (i.e. the table being altered)
  • $this->dbname is the name of the database
  • $this->newtablename is the name of the copy table or new table
  • $this->deltastable is the name of the [deltas] table
  • $this->renametable is the name to which the original table is renamed before discarding
  • $this->columns, $this->pkcolumns and $this->nonpkcolumns are comma separated lists of all columns, just the PK columns, and just the non-PK columns, respectively, of the original table
  • $this->newcolumns and $this->oldcolumns are comma separated lists of columns of the original table prefixed by ‘NEW.’ and ‘OLD.’ respectively. Similarly we also have $this->oldpkcolumns and $this->newpkcolumns.
  • IDCOLNAME and DMLCOLNAME are the names of the ID column and the DML TYPE column in the [deltas] table
  • TEMP_TABLE_IDS_TO_EXCLUDE refers to the temp table used for IDs to exclude. Its actual name is ‘__osc_temp_ids_to_exclude’.
  • TEMP_TABLE_IDS_TO_INCLUDE refers to the temp table used for IDs to include. Its actual name is ‘__osc_temp_ids_to_include’.
  • $this->insert_trigger, $this->delete_trigger, and $this->update_trigger refer to trigger names.
  • $this->get_pkmatch_condition($tableA, $tableB) generates a condition of the form tableA.pkcolumn1=tableB.pkcolumn1 AND tableA.pkcolumn2=tableB.pkcolumn2 … where pkcolumn1, pkcolumn2 etc. are PK columns of the original table. tableA and tableB would be table references in the FROM clause.

Initialization

  • Here we turn off the bin log using ‘SET sql_log_bin = 0’.
  • We do validations like checking that there are no foreign keys, checking that a PK exists, and checking the InnoDB version.
  • We also retrieve all column information of the table being altered, so that we don’t have to read from the information schema multiple times. (QUESTION: what happens if columns get changed by another alter table running in parallel? For now we assume that operations is aware of the alter table commands being run and won’t run two in parallel.)

This query retrieves the column names:

$query = "select column_name, column_key from ".
         "information_schema.columns ".
         "where table_name ='%s' and table_schema='%s'";
$query = sprintf($query, $this->tablename, $this->dbname);

If column_key is not ‘PRI’, we infer that the column is NOT part of the primary key.

// for PK columns we need them to be in correct order as well.
$query = "select * from information_schema.statistics ".
         "where table_name = '%s' and TABLE_SCHEMA = '%s' ".
         " and INDEX_NAME = 'PRIMARY' ".
         "order by INDEX_NAME, SEQ_IN_INDEX";
$query = sprintf($query, $this->tablename, $this->dbname);

create_copy_table

The copy table is named concatenate(‘__osc_new_’, originaltablename), truncated to 64 characters (the maximum name length). It is created with ‘create table <copytable> LIKE <originaltable>’.

alter_copy_table

The DDL command to alter the original table is given as input. We replace the original table name with the copy table name by doing:

$altercopy = preg_replace('/alter\s+table\s+/i',
                          'ALTER TABLE ', $this->altercmd,
                          -1, $count);
$altercopy = preg_replace('/ALTER\s+TABLE\s+'.$this->tablename.'/',
                          'ALTER TABLE '.$this->newtablename,
                          $altercopy, -1, $count);
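For illustration (a hypothetical input, not taken from the project docs), a column-add on table foo would end up running the following against the copy table after the two replacements:

CREATE TABLE __osc_new_foo LIKE foo;              -- create_copy_table step
-- input DDL:  ALTER TABLE foo ADD COLUMN note VARCHAR(40)
-- after the table name is rewritten, the statement actually run is:
ALTER TABLE __osc_new_foo ADD COLUMN note VARCHAR(40);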

The command is then run to alter the copytable in the same way we want the original table to look after the alter. If there are < 1 or > 1 matches in either of the preg_replace calls mentioned above, an exception is raised.

We also retrieve index info using the following query so that we can drop and recreate the NC indexes. (QUESTION: what happens if a concurrent alter table adds or drops an index while this is running? For now we assume that operations is aware of the alter table commands being run and won’t run two in parallel.)

$query = "select * from information_schema.statistics ".
         "where table_name = '%s' and ".
         "TABLE_SCHEMA = '%s' ".
         "order by INDEX_NAME, SEQ_IN_INDEX";
$query = sprintf($query, $this->newtablename, $this->dbname);

The following columns in the select list are used:

  • NON_UNIQUE: gives info on whether the index is non-unique.
  • COLUMN_NAME: gives the name of the column that is in the index.
  • SUB_PART: indicates whether the index is on part of a column. (For example, if an index is created on a varchar(1000) column, InnoDB only creates the index on the first 767 chars. The SUB_PART column gives this value.)
  • INDEX_NAME: gives the name of the index. If the name is ‘PRIMARY’ it is inferred to be the primary index.

We also check whether the old PK (available in $this->pkcolumnarry) is a prefix of at least one index after the alter table. Note that if the old PK is (a, b) and after the alter table there is an index on (b, a), that is OK as it supports efficient lookups if the values of both a and b are provided. This check is done because replay would be very inefficient if lookups based on the old PK columns are inefficient after the alter table.

create_deltas_table

This creates the change capture table. It is named concatenate(‘__osc_deltas_’, originaltablename), truncated to 64 characters (the maximum name length). It is created using:

$createtable = 'create table %s'.
               '(%s INT AUTO_INCREMENT, '.
               '%s INT, primary key(%s)) '.
               'as (select %s from %s LIMIT 0)';
$createtable = sprintf($createtable, $this->deltastable,
                       IDCOLNAME, DMLCOLNAME,
                       IDCOLNAME, $this->columns,
                       $this->tablename);
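To make the template concrete, here is a sketch of what the generated statement might look like for a hypothetical table foo(id INT, val VARCHAR(10)); the ID and DML-type column names below are stand-ins for IDCOLNAME and DMLCOLNAME, whose actual values are not given in this document:

CREATE TABLE __osc_deltas_foo
  (__osc_id INT AUTO_INCREMENT, __osc_dmltype INT, PRIMARY KEY(__osc_id))
  AS (SELECT id, val FROM foo LIMIT 0);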

create_triggers

As mentioned before, in 5.1 we lock the table, create the triggers and then unlock the table, but in 5.0 we create the triggers first and then lock and unlock the table.

The insert trigger is created as:

$trigger = 'create trigger %s AFTER INSERT ON %s '.
           'FOR EACH ROW '.
           'insert into %s(%s, %s) '.
           'values (%d, %s)';
$trigger = sprintf($trigger, $this->insert_trigger,
                   $this->tablename,
                   $this->deltastable, DMLCOLNAME,
                   $this->columns, DMLTYPE_INSERT,
                   $this->newcolumns);
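For the same hypothetical foo(id, val) table, the expanded statement would look roughly like this (the trigger name, deltas column names and the DMLTYPE_INSERT value of 1 are all illustrative):

CREATE TRIGGER __osc_trg_ins_foo AFTER INSERT ON foo
FOR EACH ROW
  INSERT INTO __osc_deltas_foo(__osc_dmltype, id, val)
  VALUES (1, NEW.id, NEW.val);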

The delete trigger is created as:

$trigger = 'create trigger %s AFTER DELETE ON '.
           '%s FOR EACH ROW '.
           'insert into %s(%s, %s) '.
           'values (%d, %s)';
$trigger = sprintf($trigger, $this->delete_trigger,
                   $this->tablename,
                   $this->deltastable, DMLCOLNAME,
                   $this->pkcolumns, DMLTYPE_DELETE,
                   $this->oldpkcolumns);

The update trigger is created as:

// if primary key is updated, map the update
// to delete followed by insert
$trigger = 'create trigger %s AFTER UPDATE ON '.
           '%s FOR EACH ROW '.
           'IF (%s) THEN '.
           ' insert into %s(%s, %s) '.
           ' values(%d, %s); '.
           'ELSE '.
           ' insert into %s(%s, %s) '.
           ' values(%d, %s), '.
           ' (%d, %s); '.
           'END IF';
$trigger = sprintf($trigger, $this->update_trigger,
                   $this->tablename,
                   $this->get_pkmatch_condition('NEW', 'OLD'),
                   $this->deltastable, DMLCOLNAME,
                   $this->columns,
                   DMLTYPE_UPDATE, $this->newcolumns,
                   $this->deltastable, DMLCOLNAME,
                   $this->columns, DMLTYPE_DELETE,
                   $this->oldcolumns,
                   DMLTYPE_INSERT, $this->newcolumns);

start snapshot xact

Here we ‘start transaction with consistent snapshot’. At this point the deltas table may already have some changes done by transactions that committed before our snapshot began. Since such changes are already reflected in our snapshot, we don’t want to replay them again during the replay phase. So we also create a temp table named __osc_temp_ids_to_exclude to save the IDs of records that already exist in the deltas table.

$createtemp = 'create temporary table %s(%s INT, %s '.
              'INT, primary key(%s))';
$createtemp = sprintf($createtemp, $temptable,
                      IDCOLNAME,
                      DMLCOLNAME, IDCOLNAME);

Since InnoDB gets read locks during “insert into T1 select * from T2” statements, we select out into an outfile and load from that. The outfile is created in the ‘secure-file-priv’ folder with the name concatenate(‘__osc_ex_’, $this->tablename).

$selectinto = "select %s, %s from %s ".
              "order by %s into outfile '%s' ";
$selectinto = sprintf($selectinto, IDCOLNAME,
                      DMLCOLNAME, $this->deltastable,
                      IDCOLNAME, $outfile);

// read from outfile above into the temp table
$loadsql = sprintf("LOAD DATA INFILE '%s' INTO ".
                   "TABLE %s(%s, %s)",
                   $outfile, $temptable,
                   IDCOLNAME, DMLCOLNAME);

select_table_into_outfile

If an outfile folder is passed in, we use that. Otherwise, if @@secure_file_priv is non-NULL, we use it as the outfile folder. Otherwise we use @@datadir/dbname as the outfile folder. We assume @@datadir is non-NULL.

The outfile is named concatenate(‘__osc_tbl_’, originaltablename). Since we use multiple outfiles, they are suffixed .1, .2, .3 etc.

We also commit the snapshot xact here.

drop NC indexes

In 5.1 we iterate over the index info gathered in the previous step and drop all indexes whose name is NOT ‘PRIMARY’. We also don’t drop indexes in which the first column is an AUTO_INCREMENT column. We use this command to drop an index:

$drop = sprintf('drop index %s on %s', $this->indexname, $this->newtablename);

Indexes are not dropped in 5.0, as mentioned before.

load copy table

We use this command to load each outfile:

$loadsql = sprintf("LOAD DATA INFILE '%s' INTO ".
                   "TABLE %s(%s)",
                   $this->outfile_table,
                   $this->newtablename,
                   $this->columns);
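For the hypothetical foo table, one such load might expand to something like the following (the path, suffix and column list are illustrative):

LOAD DATA INFILE '/var/lib/mysql/test/__osc_tbl_foo.1'
INTO TABLE __osc_new_foo(id, val);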

recreate NC indexes

We iterate over the index info gathered in the ‘alter_copy_table’ step and recreate all indexes whose name is NOT ‘PRIMARY’. We use one alter table command to create all NC indexes.

If the ‘SUB_PART’ column value in information_schema.statistics is not null, we use it while building the column list. For example, if the SUB_PART value for column ‘comment’ is 767, we use ‘comment(767)’ in the column list passed to the create index command.

Replay changes

As mentioned before, replay changes can be done multiple times. We maintain a temp table called TEMP_TABLE_IDS_TO_EXCLUDE to track the IDs that have been processed already. The set of IDs to process is obtained by taking the IDs from the deltas table, excluding those that are in TEMP_TABLE_IDS_TO_EXCLUDE, and saving the result in TEMP_TABLE_IDS_TO_INCLUDE.

// Select rows from deltastable that are not in
// TEMP_TABLE_IDS_TO_EXCLUDE.
// Use left outer join rather than 'in' subquery for better perf.
$idcol = $this->deltastable.'.'.self::IDCOLNAME;
$dmlcol = $this->deltastable.'.'.self::DMLCOLNAME;
$idcol2 = self::TEMP_TABLE_IDS_TO_EXCLUDE.'.'.self::IDCOLNAME;
$selectinto = "select %s, %s ".
              "from %s LEFT JOIN %s ON %s = %s ".
              "where %s is null order by %s into outfile '%s' ";
$selectinto = sprintf($selectinto, $idcol, $dmlcol,
                      $this->deltastable,
                      self::TEMP_TABLE_IDS_TO_EXCLUDE,
                      $idcol,
                      $idcol2, $idcol2, $idcol, $outfile);

We process about 500 rows per transaction (except for the final replay, which happens while holding a WRITE lock on the table and is done without starting any new transaction).

Here is the query to retrieve the IDs and DML type from TEMP_TABLE_IDS_TO_INCLUDE:

$query = sprintf('select %s, %s from %s order by %s',
                 IDCOLNAME, DMLCOLNAME,
                 TEMP_TABLE_IDS_TO_INCLUDE,
                 IDCOLNAME);

The DMLCOLNAME column tells whether it is an insert, delete or update.

Here is how an insert is replayed:

$insert = sprintf('insert into %s(%s) select %s '.
                  'from %s where %s.%s = %d',
                  $this->newtablename,
                  $this->columns,
                  $this->columns,
                  $this->deltastable,
                  $this->deltastable,
                  IDCOLNAME,
                  $row[IDCOLNAME]);

Here is how a delete is replayed:

$delete = sprintf('delete %s from %s, %s '.
                  'where %s.%s = %d AND %s',
                  $newtable, $newtable,
                  $deltas, $deltas,
                  IDCOLNAME,
                  $row[IDCOLNAME],
                  $this->get_pkmatch_condition($newtable, $deltas));

Here is how an update is replayed:

$update = sprintf('update %s, %s SET %s where '.
                  '%s.%s = %d AND %s ',
                  $newtable, $deltas,
                  $assignment, $deltas,
                  IDCOLNAME,
                  $row[IDCOLNAME],
                  $this->get_pkmatch_condition($newtable, $deltas));
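Expanded for the hypothetical foo(id, val) table, and assuming $assignment sets the non-PK columns from the deltas row and the deltas ID column is named __osc_id (both assumptions for illustration), a single replayed update would look roughly like:

UPDATE __osc_new_foo, __osc_deltas_foo
SET __osc_new_foo.val = __osc_deltas_foo.val
WHERE __osc_deltas_foo.__osc_id = 42
  AND __osc_new_foo.id = __osc_deltas_foo.id;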

Swap tables

Here are the steps, as outlined in the cut-over phase above:

  • TURN AUTOCOMMIT OFF: ‘set session autocommit=0’ // without this, lock tables does not take InnoDB locks
  • lock all the tables in WRITE mode:

$lock = sprintf('lock table %s WRITE, %s WRITE, %s WRITE',
                $this->tablename,
                $this->newtablename, $this->deltastable);

  • final replay
  • rename the two tables using two alter statements:

$rename_original = sprintf('alter table %s rename %s',
                           $this->tablename, $this->renametable);
$rename_new = sprintf('alter table %s rename %s',
                      $this->newtablename, $this->tablename);

  • COMMIT // the alter tables would have already caused implicit commits in InnoDB
  • unlock tables
  • TURN AUTOCOMMIT ON: ‘set session autocommit=1’

Cleanup

  • ROLLBACK in case we are in the middle of a xact
  • Turn autocommit back on in case we turned it off
  • if the trigger cleanup variables are set, drop the triggers and unset the trigger cleanup variables
  • if the outfile cleanup variables are set, delete the outfiles and unset the outfile cleanup variables
  • if the cleanup variable is set for both newtable and renamedtable, it means the failure happened between the two alter tables. In this case just rename renamedtable back to the original table and unset the cleanup variable for renamedtable.
  • if the cleanup variable is set for newtable, renamedtable or the deltas table, drop the corresponding tables and unset the corresponding cleanup variables (a sketch of the statements involved follows this list)
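A minimal sketch of the kind of statements this boils down to for a hypothetical table foo (names are illustrative, built from the __osc_ prefixes mentioned above; the backup-table prefix is assumed):

DROP TRIGGER IF EXISTS __osc_trg_ins_foo;   -- likewise for the update and delete triggers
DROP TABLE IF EXISTS __osc_new_foo, __osc_deltas_foo;
-- outfiles under the outfile folder are removed from the filesystem, and if both the
-- backup table and the copy table exist, the backup is renamed back:
-- ALTER TABLE __osc_old_foo RENAME foo;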

In the force cleanup mode we pretend as though all cleanup variables are set, and use ‘drop if exists’.

For details Click Here. Thank you Mike.

If you faced any issue with the original schema on a single page, please Click Here.

Motorola hopes for Android Tablet

Motorola co-chief Sanjay Jha painted a future for the company that included beefed-up smartphones and a tablet computer running on Google-backed Android software.

Motorola Android Tablet

Jha hoped Motorola would have an Android tablet computer ready early next year, he said while sharing thoughts at a Deutsche Bank Technology Conference in San Francisco. “I will only develop a tablet if it is sufficiently compelling,” Jha said during an on-stage chat. “Hopefully, that is early next year.”

Jha has been counting on smartphones running Android to help turn around the Illinois-based company’s flagging fortunes but didn’t consider the latest generation of the mobile software ready for use in tablets.

“I see the tablet market as an opportunity; no cannibalization with smartphones,” Jha said. “iPad is more an extension of iPhone than a migration of a Macintosh. I think that is a natural expansion for us.”

Apple has sold millions of iPad tablet computers since the California company began selling them internationally in April.

“The convergence of mobility and computing is very important for us,” Jha said. “There could be more form factors that are more smartphone-centric.”

A priority in the Android “eco-system” is to improve the online marketplace for fun or functional applications that is key to the popularity of smartphones.

“The Marketplace experience on Android is good, it is not great,” Jha said. “We are trying to rectify that situation. You will see us as an ecosystem very focused on that.”

He also expected more powerful multi-core computer chips to be built into smartphones in the coming year to boost capabilities, speed and features.

Competition in the smartphone market promises to be intense as this year finishes. The iPhone continues to be a hot seller and a host of smartphones based on Windows Phone 7 should debut soon with the new Microsoft software.

BlackBerry maker Research In Motion is also due to release the latest generation of its mobile operating system.

“Nearly everyone in this business is clicking on all cylinders,” Jha said.

The US remains the biggest market for high-end smartphones, but Jha spoke of growing markets in China, India, and Latin America.

Jha noted that Motorola also makes TV set-top boxes and that it is “eager to participate” in an Internet-driven evolution of home entertainment that could involve routing digital content from smartphones to televisions.

“You’ve seen Google TV and Apple TV in that space,” Jha said. “I think there are some very good opportunities there.”

Motorola posted a six-fold increase in quarterly net profit in July and an optimistic outlook for its mobile phone division ahead of its separation next year.

Jha said at the time that demand was outstripping supply for the “Droid X,” an Android smartphone seen as Motorola’s answer to Apple’s iPhone.

Motorola is selling most of its wireless network infrastructure business to Finnish-German giant Nokia Siemens Networks for 1.2 billion dollars.

Motorola plans to split its businesses in the first quarter of next year, separating products for consumers from its professional equipment division.

The mobile and home entertainment devices division will operate as Motorola Mobility.

The other company, Motorola Solutions, will consist of its enterprise mobility solutions and networks businesses, which include two-way radios, mobile computers, secure public safety systems and scanners.

iPhone Case with flip-out keyboard

If you happen to love your iPhone to bits, but still have problems getting used to the virtual keyboard, then perhaps, just perhaps, you might want to consider bringing home the TK-421 iPhone Case with Flip-Out Keyboard.

iPhone Flip case

This will certainly provide you with the best of both worlds: the physical QWERTY keyboard would increase your productivity and make typing on the iPhone a whole lot easier, while offering some form of protection to your favorite smartphone at the same time.

Retailing for $49.99 a pop, you can choose from either iPhone 3GS or iPhone 4 models, but unfortunately for those who have yet to place a pre-order, it is already out of stock at the moment. Well, if your boss gets you this for Christmas, you can be sure that he’s dropping a subtle hint for you to start replying to emails the moment they are pushed to your inbox, without complaining that you just don’t know how to use the virtual keyboard.

MySQL – How to create a database diagram based on an existing image

Often we make sketches of the database we plan to create on a blackboard or a sheet of paper before we actually design its structure on a computer. After that we discuss the entities we’ve got, normalize them and repeat these actions several times. As a result we get a completely approved database structure in the form of an image file in the project documentation. In this article we’ll try doing this using the Database Designer of dbForge Studio for MySQL.

Suppose that you have a sketch of the future database:

Database Structure

To place this picture onto an empty diagram you should create an empty document, for example Diagram1.dbd, by pressing New Database Diagram on the Standard toolbar. After that you should press the New Image button on the Database Diagram toolbar. The mouse pointer will change to an icon with a picture. Click on any part of the diagram. In the Open dialog window that appears, select the image with the database structure sketch.

Database Designer: Open New Image

Now that you can see the database sketch, you can recreate the database from it. Let’s create the necessary tables with primary keys and indexes one by one. For example, to create the Sessions table, press the New Table button on the Database Diagram toolbar. The mouse pointer should change to an icon with a table. Click on any part of the diagram. A window for editing the Table1 table should appear.

Database Designer: Create New Table

Using the table editor window you should do the following:

  • On the General tab, edit the table name; add a key column (for this column you should edit its name and datatype, and set the Primary option); then add all other columns (unchecking the additional Allow nulls(*) option)
  • On the Indexes tab, create indexes for all key columns and uncheck the Unique option

As a result we’ve got a new entity on the diagram – the Sessions table.

Database Designer: Design New Table

Now we can add a relation between the Hits and Sessions tables. To do this, you should:

  • press the New Relation button on the Database Diagram toolbar. The mouse pointer should change to an icon with an arrow. Then click the Hits table and, without releasing the mouse button, drag the cursor to any part of the Sessions table and release the mouse button(**).
  • in the Foreign Key Properties window that appears, select the SessionID column from the “Table Columns” list and press the [→] button. The SessionID column is moved to the “Constraint Columns” list. Save these changes by pressing OK.

As a result, we’ve bound the two tables – “Hits” and “Sessions” – using the foreign key “hits_FK”.

Database Designer: Create New Relation

Now we repeat the same operations: creating and designing tables, and creating indexes and relations between the tables.

Database Designer: Display Relation

An important part of the database design process is the logical division of database objects into groups. The Database Designer available in dbForge Studio for MySQL has a special Container component for this purpose. To create a new container and move the necessary objects into it you should:

  • Press the New Container button on the Database Diagram toolbar. The mouse pointer should change to an icon with three squares. Click on an empty place on the diagram. A container with the name Group1 appears. Let’s change the container name;
  • Select the tables you want to move to the container. For example, let’s select the Users, Registrars, Products, and OrderLinks tables;
  • Move the selected tables onto the container.

Database Designer: New Container

The final step in the process of creating a database from a sketch is optimizing the placement of the database objects on the diagram. The algorithm used by Layout Diagram redraws the relations between tables so that they do not intersect each other. This saves space on the diagram and makes it readable.

Database Designer: Layout Diagram

As a result of the actions described above, we’ve created a database from a sketch without switching over to other applications to view the image of the diagram with Alt+Tab, and without printing the sketch, owing to the functionality of dbForge Studio for MySQL.

A couple of notes:

  • On the diagram, columns with the Not Null property enabled are displayed in bold (for example, the HitDate column of the SpiderHits table), unlike other columns (for example, the HitUrl column of the SpiderHits table).
  • To create a foreign key between tables, both tables must have been created with Engine=InnoDB (see the sketch below).
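As a minimal sketch of that requirement, using the Hits and Sessions tables from the example diagram (column definitions other than SessionID are made up for illustration):

CREATE TABLE Sessions (
  SessionID INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (SessionID)
) ENGINE=InnoDB;

CREATE TABLE Hits (
  HitID INT NOT NULL AUTO_INCREMENT,   -- illustrative column
  SessionID INT NOT NULL,
  PRIMARY KEY (HitID),
  KEY (SessionID),
  CONSTRAINT hits_FK FOREIGN KEY (SessionID) REFERENCES Sessions (SessionID)
) ENGINE=InnoDB;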

You can download an evaluation copy of dbForge Studio for MySQL.
