
Saturday, April 25, 2009

User-Defined DDL LCRs

In "User-Defined DML LCRs", I've shown how to use lcr$_row_record.construct to build LCRs for DELETE, UPDATE and INSERT; In the same post, I've created an apply process to execute those LCRs. I wanted to add an example of a DDL LCR at the same time, but because of an error with my 11.1.0.7 AL32UTF8 database, I have not been able to.

Not a big deal! I've created a WE8ISO8859P15 database and I'll correct that misfire right away. You'll find below a sample configuration that shows how to use lcr$_ddl_record.construct.

Streams Queue and Apply

We'll use the same schema for the queue, for the apply process and to apply the DDL. To speed up the setup, I've granted DBA to the schema owner. There are three points worth mentioning:
  • I've set the apply_captured parameter to false to make sure the apply process dequeues persistent messages
  • I did not add any rule set to the apply so that it dequeues every message in the queue
  • There is no need to define an instantiation SCN or to set an SCN in the LCR (in the case of user-defined LCRs)
begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'custom_queue_table',
    queue_name  => 'custom_queue');
end;
/
declare
  v_name varchar2(256);
begin
  select value into v_name
    from v$parameter
   where name='db_unique_name';
  dbms_apply_adm.create_apply(
    queue_name      => 'custom_queue',
    apply_name      => 'custom_apply',
    apply_captured  => false,
    source_database => v_name);
end;
/

exec dbms_apply_adm.start_apply('CUSTOM_APPLY');
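Before going further, you can check the apply process is actually up and running; a quick look at the dba_apply dictionary view is enough:

select apply_name, status
  from dba_apply
 where apply_name='CUSTOM_APPLY';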

Create and Enqueue A DDL LCR

Once everything is ready, you can build the LCRs and enqueue them so that the CREATE TABLE can be consumed and executed by the apply process:
declare
  v_name    varchar2(256);
  lcr       sys.lcr$_ddl_record;
  enqopt    DBMS_AQ.ENQUEUE_OPTIONS_T;
  mprop     DBMS_AQ.MESSAGE_PROPERTIES_T;
  enq_msgid RAW(16);
begin
  -- Get the DB name; the SCN stays null in a user-defined LCR
  select value into v_name
    from v$parameter
   where name='db_unique_name';

  mprop.SENDER_ID := SYS.AQ$_AGENT(user, null, null);

  lcr := sys.lcr$_ddl_record.construct(
    source_database_name => v_name,
    command_type         => 'CREATE TABLE',
    object_owner         => user,
    object_name          => 'MYTABLE',
    object_type          => 'TABLE',
    ddl_text             =>
      'create table MYTABLE(id number primary key)',
    logon_user           => user,
    current_schema       => user,
    base_table_owner     => user,
    base_table_name      => 'MYTABLE',
    tag                  => null,
    transaction_id       => null,
    scn                  => null);

  DBMS_AQ.ENQUEUE(
    queue_name         => 'custom_queue',
    enqueue_options    => enqopt,
    message_properties => mprop,
    payload            => anydata.ConvertObject(lcr),
    msgid              => enq_msgid);
end;
/

commit;

desc mytable

Name Null? Type
---- -------- ------
ID NOT NULL NUMBER

declare
  v_name    varchar2(256);
  lcr       sys.lcr$_ddl_record;
  enqopt    DBMS_AQ.ENQUEUE_OPTIONS_T;
  mprop     DBMS_AQ.MESSAGE_PROPERTIES_T;
  enq_msgid RAW(16);
begin
  -- Get the DB name; the SCN stays null in a user-defined LCR
  select value into v_name
    from v$parameter
   where name='db_unique_name';

  mprop.SENDER_ID := SYS.AQ$_AGENT(user, null, null);

  lcr := sys.lcr$_ddl_record.construct(
    source_database_name => v_name,
    command_type         => 'DROP TABLE',
    object_owner         => user,
    object_name          => 'MYTABLE',
    object_type          => 'TABLE',
    ddl_text             =>
      'drop table MYTABLE purge',
    logon_user           => user,
    current_schema       => user,
    base_table_owner     => user,
    base_table_name      => 'MYTABLE',
    tag                  => null,
    transaction_id       => null,
    scn                  => null);

  DBMS_AQ.ENQUEUE(
    queue_name         => 'custom_queue',
    enqueue_options    => enqopt,
    message_properties => mprop,
    payload            => anydata.ConvertObject(lcr),
    msgid              => enq_msgid);
end;
/

commit;

desc mytable

ERROR:
ORA-04043: object mytable does not exist
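The drop was executed as expected. If an LCR ever fails to apply, though, no error is raised in your session; the apply process records the failed transaction in its error queue instead. A quick way to check (standard dictionary views, nothing specific to this setup):

col error_message format a60
select local_transaction_id, error_number, error_message
  from dba_apply_error
 where apply_name='CUSTOM_APPLY';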

Conclusion

This is it. With a WE8ISO8859P15 database, it works with both 10g and 11g, and it should even work with Oracle Standard Edition... At least in theory!


Tuesday, April 21, 2009

Oracle Streams Can Do Things, Mortals Can't! The Sequel.

In my post "Oracle Streams Can Do Things, Mortals Can't!", I've shown one of the challenges of applying changes row by row, like Streams does. It arises when one of the impacted columns is subject to a unique key, a primary key or a referential integrity constraint and you change several rows as part of the same update. I've also shown that Streams can actually handle those cases, while you would have to DEFER the validation of those constraints until commit time to perform the same operations manually.

I've written another post that has just been published with my other articles on The Pythian Group Blog. It's entitled "Delete and Re-Insert a Row in the Same Statement" and it's actually the sequel to my previous post. It shows how you can leverage Streams' ability to run several SQL statements as if they were part of the same statement. You could use it to move rows... even if your table doesn't have row movement enabled and its constraints are not DEFERRABLE.

User-Defined DML LCRs

One of the great features that come with Oracle Streams is the ability to use its APIs to extend the framework and build almost anything you have in mind. Streams provides an easy way to share data across databases and with applications. In this post, I'll provide an example of PL/SQL code that publishes DML LCRs in a queue; an apply process will then execute the content of those LCRs.

Test Schema

For the purpose of this demonstration, you can build a table T1 in the schema of your choice. Below is the script that creates that table and displays its content:
create table T1
(id number,
text varchar2(10),
constraint T1_PK primary key(id));

insert into T1 values (1, 'Text 1');

commit;

col id format 99
col text format a6

select id, text
from T1;

ID TEXT
-- ------
1 Text 1

Streams Queue and Apply

I use the same schema for the table, the queue and the apply process. To speed up the setup, I've granted DBA to the schema owner. There are three points worth mentioning:
  • I've set the apply_captured parameter to false to make sure the apply process dequeues persistent messages
  • I did not add any rule set to the apply so that it dequeues every message in the queue
  • There is no need to define an instantiation SCN or to set an SCN in the LCR
begin
  dbms_streams_adm.set_up_queue(
    queue_table => 'custom_queue_table',
    queue_name  => 'custom_queue');
end;
/
declare
  v_name varchar2(256);
begin
  select value into v_name
    from v$parameter
   where name='db_unique_name';
  dbms_apply_adm.create_apply(
    queue_name      => 'custom_queue',
    apply_name      => 'custom_apply',
    apply_captured  => false,
    source_database => v_name);
end;
/

exec dbms_apply_adm.start_apply('CUSTOM_APPLY');

Create and Enqueue DML LCRs

Once everything is ready, you can build the LCRs and enqueue them so that the DELETE, UPDATE or INSERT can be consumed and executed by the apply process. You'll find below 3 examples of such user-defined DML LCRs:
  • User Defined DELETE LCR
declare
  v_name    varchar2(256);
  lcr       sys.lcr$_row_record;
  v_oldlist sys.lcr$_row_list;
  v_oldid   sys.lcr$_row_unit;
  v_oldtxt  sys.lcr$_row_unit;
  enqopt    DBMS_AQ.ENQUEUE_OPTIONS_T;
  mprop     DBMS_AQ.MESSAGE_PROPERTIES_T;
  enq_msgid RAW(16);
begin
  -- Get DB Name
  select value into v_name
    from v$parameter
   where name='db_unique_name';

  mprop.SENDER_ID := SYS.AQ$_AGENT(user, null, null);

  -- lcr$_row_unit arguments: column name, column value,
  -- LOB information, LOB offset, LOB operation size
  v_oldid := sys.lcr$_row_unit(
    'ID',
    anydata.convertnumber(1),
    DBMS_LCR.NOT_A_LOB,
    null,
    null);
  v_oldtxt := sys.lcr$_row_unit(
    'TEXT',
    anydata.convertvarchar2('Text 1'),
    DBMS_LCR.NOT_A_LOB,
    null,
    null);
  v_oldlist := sys.lcr$_row_list(v_oldid, v_oldtxt);
  lcr := sys.lcr$_row_record.construct(
    source_database_name => v_name,
    command_type         => 'DELETE',
    object_owner         => user,
    object_name          => 'T1',
    tag                  => null,
    transaction_id       => null,
    scn                  => null,
    old_values           => v_oldlist);
  DBMS_AQ.ENQUEUE(
    queue_name         => 'custom_queue',
    enqueue_options    => enqopt,
    message_properties => mprop,
    payload            => anydata.ConvertObject(lcr),
    msgid              => enq_msgid);
end;
/
commit;

select *
from t1;

no rows selected
  • User Defined INSERT LCR
declare
  v_name    varchar2(256);
  lcr       sys.lcr$_row_record;
  v_newlist sys.lcr$_row_list;
  v_newid   sys.lcr$_row_unit;
  v_newtxt  sys.lcr$_row_unit;
  enqopt    DBMS_AQ.ENQUEUE_OPTIONS_T;
  mprop     DBMS_AQ.MESSAGE_PROPERTIES_T;
  enq_msgid RAW(16);
begin
  -- Get DB Name
  select value into v_name
    from v$parameter
   where name='db_unique_name';

  mprop.SENDER_ID := SYS.AQ$_AGENT(user, null, null);

  v_newid := sys.lcr$_row_unit(
    'ID',
    anydata.convertnumber(1),
    DBMS_LCR.NOT_A_LOB,
    null,
    null);
  v_newtxt := sys.lcr$_row_unit(
    'TEXT',
    anydata.convertvarchar2('Text 9'),
    DBMS_LCR.NOT_A_LOB,
    null,
    null);
  v_newlist := sys.lcr$_row_list(v_newid, v_newtxt);
  lcr := sys.lcr$_row_record.construct(
    source_database_name => v_name,
    command_type         => 'INSERT',
    object_owner         => user,
    object_name          => 'T1',
    tag                  => null,
    transaction_id       => null,
    scn                  => null,
    new_values           => v_newlist);
  DBMS_AQ.ENQUEUE(
    queue_name         => 'custom_queue',
    enqueue_options    => enqopt,
    message_properties => mprop,
    payload            => anydata.ConvertObject(lcr),
    msgid              => enq_msgid);
end;
/
commit;

select *
from t1;

ID TEXT
-- ------
1 Text 9
  • User Defined UPDATE LCR
declare
  v_name    varchar2(256);
  lcr       sys.lcr$_row_record;
  v_oldlist sys.lcr$_row_list;
  v_oldid   sys.lcr$_row_unit;
  v_oldtxt  sys.lcr$_row_unit;
  v_newlist sys.lcr$_row_list;
  v_newid   sys.lcr$_row_unit;
  v_newtxt  sys.lcr$_row_unit;
  enqopt    DBMS_AQ.ENQUEUE_OPTIONS_T;
  mprop     DBMS_AQ.MESSAGE_PROPERTIES_T;
  enq_msgid RAW(16);
begin
  -- Get DB Name
  select value into v_name
    from v$parameter
   where name='db_unique_name';

  mprop.SENDER_ID := SYS.AQ$_AGENT(user, null, null);

  v_oldid := sys.lcr$_row_unit(
    'ID',
    anydata.convertnumber(1),
    DBMS_LCR.NOT_A_LOB,
    null,
    null);
  v_newid := sys.lcr$_row_unit(
    'ID',
    anydata.convertnumber(1),
    DBMS_LCR.NOT_A_LOB,
    null,
    null);
  v_oldtxt := sys.lcr$_row_unit(
    'TEXT',
    anydata.convertvarchar2('Text 9'),
    DBMS_LCR.NOT_A_LOB,
    null,
    null);
  v_newtxt := sys.lcr$_row_unit(
    'TEXT',
    anydata.convertvarchar2('Text 1'),
    DBMS_LCR.NOT_A_LOB,
    null,
    null);
  v_oldlist := sys.lcr$_row_list(v_oldid, v_oldtxt);
  v_newlist := sys.lcr$_row_list(v_newid, v_newtxt);
  lcr := sys.lcr$_row_record.construct(
    source_database_name => v_name,
    command_type         => 'UPDATE',
    object_owner         => user,
    object_name          => 'T1',
    tag                  => null,
    transaction_id       => null,
    scn                  => null,
    old_values           => v_oldlist,
    new_values           => v_newlist);
  DBMS_AQ.ENQUEUE(
    queue_name         => 'custom_queue',
    enqueue_options    => enqopt,
    message_properties => mprop,
    payload            => anydata.ConvertObject(lcr),
    msgid              => enq_msgid);
end;
/
commit;

select *
from t1;

ID TEXT
-- ------
1 Text 1

Conclusion

As you can see, creating and enqueuing DML LCRs is very simple.

I'm hitting an issue with DDL LCRs on top of 11.1.0.7/Linux x86 and, for now, the only way I've managed to create one is by using dbms_streams.convert_xml_to_lcr. I'll write a new post as soon as my issue is fixed.


Saturday, April 18, 2009

Oracle Streams Can Do Things, Mortals Can't!

In my last post, "Do Your Tables Need Keys to Be Replicated With Streams?", I introduced the "Manual Streams Simulator". The goal was to show how Oracle Streams works by using the simplest model we can. What I'll be showing now is that, like every tool or model, the "Manual Streams Simulator" has its limits. Actually, I'll be pointing out one of the things Streams can do, that you and your programs can NOT...

So, this post is about one of the many challenges Oracle Streams faces... because of its design. A very simple example will help you understand what I mean. Let's consider a table T1 with a column COL1 and a unique constraint on COL1:
create table T1 (COL1 number,
constraint T1_UK unique(COL1));

insert into T1(COL1) values(1);
insert into T1(COL1) values(2);
commit;
To identify the rows in the database, you can query their ROWID:
select ROWID, COL1
from T1;

ROWID COL1
------------------ ----
AAATF5AABAAAVxpAAA 1
AAATF5AABAAAVxpAAB 2
With Oracle, a single update turns the value 1 into 2 and the value 2 into 1; here is how:
update T1 set COL1=decode(COL1,1,2,1)
where COL1 in (1,2);
commit;
Note:
That's not the case with all RDBMS, but if you work with Oracle, that's not something you need to worry about.
If you don't trust me, check the ROWIDs after the update:
select ROWID, COL1
from T1;

ROWID COL1
------------------ ----
AAATF5AABAAAVxpAAA 2
AAATF5AABAAAVxpAAB 1
Just think about it: the capture process or the synchronous capture would turn that update into 2 LCRs that would be applied as 2 different statements by the apply server process. If you trace the apply server, you'll see each statement actually looks like this:
update /*+ streams restrict_all_ref_cons  */ "DESTINATION"."T1" p
set "COL1"=decode(:1,'N',"COL1",:2)
where (:3='N') and (:4="COL1")
So? Do you see what I mean? If you want to execute 2 separate update statements to change 1 into 2 and 2 into 1, you need to defer the unique constraint, right? A Streams apply executes the 2 updates without deferring anything! The reason I'm so sure is that it works even if the constraint is not deferrable on the destination.
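To make that concrete, here is a sketch of the manual equivalent; it assumes T1_UK had been declared DEFERRABLE, which is NOT the case in the script above, and it reuses the ROWIDs from the previous listing:

-- Only works if the constraint was created like this:
--   constraint T1_UK unique(COL1) deferrable initially immediate
set constraint T1_UK deferred;
update T1 set COL1=2 where rowid='AAATF5AABAAAVxpAAA'; -- 1 -> 2: duplicate tolerated for now
update T1 set COL1=1 where rowid='AAATF5AABAAAVxpAAB'; -- 2 -> 1: uniqueness restored
commit;                                                -- T1_UK is only validated here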

How does it manage to do it? Obviously, that's an internal trick! What I know for sure is that the SCN of the capture is involved, and I strongly suspect the /*+ streams */ hint is part of the deal. But unless you get your hands on the code, I doubt you'll manage to guess more.

The good news is that you can leverage that internal trick for your own needs; in my next post, I'll show you how you can use Streams to run an INSERT and a DELETE as if they were executed at the exact same SCN. It will be posted with my other posts in my section of The Pythian Group Blog. Just be patient, I'll keep you updated.


Sunday, April 12, 2009

Do Your Tables Need Keys to Be Replicated With Streams?

There is a lot of misunderstanding regarding the "requirement" for unique or primary keys on tables you want to replicate with Oracle Streams. That's probably worth some explanation...

Check the questions below:
  • Do you need a primary or a unique key on a table to replicate it with Streams?
  • How would you replicate a table that contains duplicated rows?
  • What if the primary key of the destination table is different from the primary key of the source table?
This post not only answers these questions, but also explains WHY things are the way they are.

The Manual Streams Simulator

I'm not 100% sure what the best way is to understand the challenges of a SQL-based replication, but an approach that has worked pretty well for me is to model it [1]. I've called that approach the Manual Streams Simulator.

The Manual Streams Simulator is nothing more than a reproduction of a Streams configuration in which YOU imitate with SQL what Streams would capture, propagate or apply. Among the many advantages of this simulator are its extreme simplicity and the fact that it is free of code, and thus bug-free! For now, the way the replicated environment is instantiated is not of any real interest. For this reason, we'll just build 2 tables and imitate a 1-way Streams table replication between them:
  • T1 is the source table; it contains 3 columns and 3 lines and doesn't have any index or key
  • T2 is a copy of T1 and will be used as the destination for the replication
The script below creates the 2 tables described above:
create table t1(
col1 number,
col2 number,
col3 number);

insert into t1 values (1,1,1);
insert into t1 values (1,2,2);
insert into t1 values (1,2,3);

commit;

col col1 format 9999
col col2 format 9999
col col3 format 9999
col rowid format a20

select col1, col2, col3, rowid
from t1;

COL1 COL2 COL3 ROWID
---- ---- ---- ------------------
1 1 1 AAASzVAABAAAVxpAAA
1 2 2 AAASzVAABAAAVxpAAB
1 2 3 AAASzVAABAAAVxpAAC

create table t2 as select * from t1;

select col1, col2, col3, rowid
from t2;

COL1 COL2 COL3 ROWID
---- ---- ---- ------------------
1 1 1 AAASzWAABAAAVxxAAA
1 2 2 AAASzWAABAAAVxxAAB
1 2 3 AAASzWAABAAAVxxAAC

Not all DML statements are equal

The first thing that comes to mind right after instantiating the tables is that not all DML statements are equal. The simplest DML statement you can get, by far, is insert ... values (); here is one executed on the source table:
insert into t1(col1, col2, col3)
values (2,3,4);

commit;
All you have to do is execute the same statement on the destination; in this case, we just need to change the table name from T1 to T2:
insert into t2(col1, col2, col3)
values (2,3,4);

commit;
With an update or a delete, it's a different story. Try to transform the SQL statements below so they can run on the destination:
update T1 set col3=5 where rowid='AAASzVAABAAAVxpAAB';

commit;

delete from T1 where rowid='AAASzVAABAAAVxpAAC';

commit;
What you can guess from this simple example is probably the first challenge of a SQL-based replication: when a row is modified or deleted on the source, the process that applies the changes on the destination needs to identify the corresponding row. For newly inserted rows, there is no such need.

A consequence of the above is that replicating an insert-only table will work in all cases, unless it contains unsupported data types... It won't matter if it contains duplicated rows, if you have not enabled any supplemental logging, or if you don't have any way to identify rows, whether it's a key or something else. It will always work! Obviously, it is a different story with deletes and updates.
Note:
In the case of a recovery or a physical standby, the changes, including the ones generated by deletes or updates, are re-applied to the exact same data, the exact same rows. For this reason, those processes can use the rowid to regenerate the changes; they are not subject to the constraints of a SQL-based replication. Database recovery and the physical standby MRP can rely on the rowid to replicate the changes.

Value-Based and Row-Level Capture

Most, not to say all, applications have SQL statements like this one:
insert into mytable
values(1, sysdate);

commit;
If this statement is captured as it is executed, the resulting data would differ in many cases between the source and the destination. This is because there is no way to guarantee the same statement will be applied at the same time on both sides. To ensure data doesn't differ on either side of the replication, the 2 Streams implicit captures, synchronous and asynchronous, don't rely on the formulas included in the original SQL: they capture the values of the changes!

A direct consequence of this is that, when a SQL statement modifies several rows, the Streams implicit captures track the values of the changes for each of the impacted rows; in fact, they generate a Logical Change Record (LCR) for each of the changed rows. This approach also guarantees that the changes captured from a statement like the one below will be the same on the source and the destination:
delete from mytable
where col2<=sysdate-360;
commit;
Note
Nothing prevents you from creating a trigger on the source database, capturing the original DML statements with the ora_sql_text event attribute, managing them as user messages and applying them on the destination database. However, the likelihood that it will result in different data on the destination database is very close to 100%. In addition, there is no easy or reliable way to detect which statements could lead to different data (and that's just for a 1-way replication!).

How to identify changed rows?

Let's get back to the Manual Streams Simulator and run a statement like the one below:
update T1 set col3=5 where rowid='AAASzVAABAAAVxpAAB';

commit;
How can you write a SQL statement that guarantees the exact same row will be changed on the destination table?
  • When the source and destination tables have the same primary key
If the 2 tables have the same primary or unique key, it is easy to manage: capture the value of the key with the update and build the WHERE clause condition with it, instead of with the rowid. Let's say the key is (COL2, COL3); what you would write to make sure the change will be the same on the source and destination is:
update T2 set col3=5
where col2=2 and col3=2;

commit;
The 2 Streams implicit captures behave exactly like the Manual Streams Simulator: (1) if you use the synchronous capture, the OLD and NEW values of all the columns are captured anyway, so the Streams apply can easily build the WHERE clause condition like you did; (2) if you use the asynchronous capture, all you need is to add supplemental logging for the primary key or the unique keys on the source table to get the same result, as shown below.
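For the asynchronous capture, that supplemental logging is a one-liner at the table level; a minimal example, assuming the key is actually declared as the primary key of T1 (you would use "(unique) columns" for a unique key):

alter table T1
add supplemental log data (primary key) columns;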
  • When the source and destination tables have a row identifier that is not enforced by a key or when the keys on the 2 tables don't match
From the Manual Streams Simulator's point of view, this second case is no different from the previous one. If you know there is a unique identifier, even if it's not enforced by a key, or if the key is different on the destination, you won't pay attention to that. Let's say the row identifier is (COL2, COL3) too; what you would write to make sure the change will be the same on the source and destination is identical to what you've written previously:
update T2 set col3=5
where col2=2 and col3=2;

commit;
Obviously, Streams cannot guess a row identifier that is not enforced by a key. You will have to help it. In the case of the synchronous capture, all the OLD and NEW values of all the columns are captured, so Streams will be able to build the SQL that corresponds to the row change without any change on the capture side; you just need to tell it what the identifier is on the apply side. In the case of the asynchronous capture, column values are not captured by default and you'll need to add supplemental logging for the columns that form the unique identifier; here is how to do it:
alter table T1
add supplemental log group T1_UID(COL2, COL3) always;
In both cases, you also need to tell the Apply process what the unique identifier is. There are 2 ways to do it:
  • you can use the dbms_apply_adm.set_key_columns procedure and define the identifier to use on the destination, as shown in the sketch after this list
  • you can create, on the destination table, the primary key corresponding to the unique identifier of the source. You can do this even if the identifier is not enforced by a key on the source. Obviously, it assumes there are no duplicate values for the key on the destination.
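Here is what the first method could look like for our example; a sketch only, assuming the destination table T2 lives in the SCOTT schema and (COL2, COL3) is the identifier:

begin
  dbms_apply_adm.set_key_columns(
    object_name => 'SCOTT.T2',
    column_list => 'COL2,COL3');
end;
/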
Note:
If you don't create the key on the destination table and don't specify it with the dbms_apply_adm.set_key_columns procedure, the apply process assumes the key is made of all the columns of the destination table, except the ones of type LONG, LONG RAW or LOB. In the above example, if the update is captured by an asynchronous capture, it will fail with a message like the one below:
ORA-26787: The row with key ("COL1","COL2","COL3")= (,2,2) does not exist in table T2
ORA-01403: no data found
In the case of a synchronous capture, the same update will succeed because all the column values are captured and the LCR will actually contain (1,2,2).
  • When the source table doesn't have any unique identifier
In that case, Streams assumes all the columns, except the ones of type LONG, LONG RAW or LOB, make up a key. If you use the asynchronous capture, make sure you capture the data of all those columns by running the statement below:
alter table T1
add supplemental log data (all) columns;
Note
That approach may not work for you. If there are duplicate values for what Streams considers the identifier, the replication will work until one of those duplicate values is modified. The apply process will then generate an error that looks like the one below:
ORA-01422: exact fetch returns more than requested number of rows
And you'll have to manage that error manually. In most cases, using the allow_duplicate_rows parameter to work around this situation is a bad idea; however, if your table doesn't have any column of type LONG, LONG RAW or LOB, a DML handler that modifies only one of the several duplicated rows can help to work around that issue; see the sketch below.
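Writing such a handler is a topic of its own, but for the record, registering one could look like the sketch below; the schema, procedure and apply names are assumptions, and the procedure body, which would re-execute the change against a single row (e.g., with a rownum=1 predicate), is left out:

begin
  dbms_apply_adm.set_dml_handler(
    object_name    => 'SCOTT.T2',
    object_type    => 'TABLE',
    operation_name => 'UPDATE',
    error_handler  => false,
    user_procedure => 'SCOTT.T2_UPDATE_HANDLER',
    apply_name     => 'CUSTOM_APPLY');
end;
/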

Retrieving rows efficiently on the destination

Let's now imagine T1 has 1 million rows and is 1GB in size, and you don't use any of those Exadata storage servers. What if you run update T1 set col3=col3+1; on the source database? I confess this is probably not a good idea, even without Streams, but who knows what developers can write?

Capturing this change with one of the Streams implicit captures may indeed be a very bad idea. It will result in 1 million LCRs and 1 update per row on the destination. If the row identifier is made of one column and is not indexed on the destination, the apply server process will perform a full table scan for every one of the 1 million captured changes. You can easily imagine that will simply make the replication impossible in this case.

So, while it doesn't really matter whether the source table has a primary key or not, the way you index the destination table can be very important for Streams to be viable. Keep that in mind when you set up a Streams replication with source tables that don't have primary keys.
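For the (COL2, COL3) identifier used throughout this post, that could be as simple as the statement below on the destination (the index name is arbitrary):

create index T2_I1 on T2(col2, col3);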

Conclusion

The existence of keys, unique or primary, on the capture side of a Streams configuration greatly simplifies the replication. On the other hand, a missing key on a table doesn't mean it cannot be replicated with Streams. In most cases, you can avoid creating surrogate keys with a sequence and a trigger on the source, something most applications wouldn't support anyway:
  • If a table is an insert-only table and doesn't contain unsupported data types, it can always be replicated with the implicit captures,
  • If you can find a combination of columns to identify rows, even if it's not enforced by a key, you can use it to set up the replication,
  • If a table has duplicated rows but no LONG, LONG RAW or LOB columns, you can write a DML handler to manage the error associated with the change of one of those rows.
One other important aspect to consider when you build a Streams configuration is its purpose. For example, if you use Streams to migrate a database from AIX to Linux, or the other way around, managing 95% of the problem may be enough to meet your downtime requirements; you can then complete your Streams strategy with the method of your choice. And that's already another story...


[1] While I tend to agree with Moshez's thoughts about "science is inductive reasoning" vs. "deductive reasoning and (I would add) absurd reasoning are mathematics", I would like to emphasize that the Manual Streams Simulator doesn't have anything to do with science; it's engineering, or I should actually say reverse engineering.


