You’re better off using the API than modifying the database directly, because then things like history get updated as well.
I think you might have to deal with egw_api_content_history.
That was my original intention for all my import processes. However, getting access to just the eGW API without it moaning, and without having to write an application for eGW, was a lot more trouble than it was worth. I simply tailed the SQL query log to discover what was actually being modified, and went from there. I know it’s a bit naughty, but I was genuinely surprised at how hard interfacing with the API is when all you need is a quick, dirty hack finished in 20 minutes. Don’t forget I have a commercial consideration and can’t justify spending a day perfecting what could be done in a dirty 20-minute hack. Although I have to say, running eGW+pERP in a virtual machine made this process of trial and error much easier, as I could instantly roll back changes to multiple tables in a few moments and try again until it worked, safe in the knowledge that I wasn’t repeatedly causing compounding damage.
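In case it’s useful to anyone else, the trick roughly amounts to turning on the database’s query log, performing the action once in the eGW web UI, and watching which tables get written to. The sketch below assumes a MySQL backend (the general_log settings are MySQL-specific; PostgreSQL’s log_statement does a similar job) and is only meant as an outline, not a polished tool:

#!/usr/bin/env python
# Rough sketch: watch MySQL's general query log to see which eGW tables an
# action in the web UI actually touches. Assumes a MySQL backend and an
# account with enough privileges to set global variables.
#
# First, on the MySQL server (once):
#   SET GLOBAL log_output = 'FILE';
#   SET GLOBAL general_log_file = '/tmp/egw_query.log';
#   SET GLOBAL general_log = 'ON';
#
# Then perform the action once in the eGW UI and run this filter.

import re
import sys
import time

LOG_FILE = "/tmp/egw_query.log"   # must match general_log_file above
WRITE_RE = re.compile(r"\b(INSERT|UPDATE|DELETE)\b.*?\b(egw_\w+)", re.IGNORECASE)

def tail(path):
    """Yield lines appended to path, like `tail -f`."""
    with open(path) as fh:
        fh.seek(0, 2)             # start at the end of the file
        while True:
            line = fh.readline()
            if not line:
                time.sleep(0.2)
                continue
            yield line

for line in tail(LOG_FILE):
    m = WRITE_RE.search(line)
    if m:
        # Print just the statement type and the table it hits.
        sys.stdout.write("%s -> %s\n" % (m.group(1).upper(), m.group(2)))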
All my import processes update the eGW+pERP tables with vast amounts of data from various legacy systems, and there’s also the consideration that running everything through the API isn’t resource efficient; it would take at least twice as long. The total import process for my latest client can already take around 2.5 hours, and that doesn’t even include the historical data we decided, as a conversion compromise, there was no value in importing. You also have to bear in mind that it’s a large assumption that the legacy systems could even run eGW+pERP, and that different business processes are usually split over different applications and different servers. You can’t keep moving the eGW+pERP installation around each system and pull data in bit by bit (again, assuming that could even be done on each of the legacy systems). The import scripts are typically programmed in whatever language is suitable on the particular legacy system containing the particular data, and they send that data in a raw format to the import server, which glues it all back together and does the actual writing to the database. This is the most practical way I’ve found to achieve the end goal across nearly all of the myriad existing systems out there.
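To give a feel for the “glue” end of that pipeline, here is a bare-bones outline of the import-server side: each legacy extractor dumps its rows as newline-delimited JSON, and the server writes them out in one transaction. The table, columns and connection details are invented purely for illustration (they are not real eGW+pERP schema), and pymysql is just one possible driver:

#!/usr/bin/env python
# Minimal sketch of the import-server side: read newline-delimited JSON rows
# dumped by the extraction scripts on each legacy system and write them to
# the target database in one transaction. The table/column names and the
# connection details are illustrative only, not real eGW+pERP schema.

import json
import sys

import pymysql  # or whichever driver matches your eGW database backend

# Placeholder target -- replace with the real table/columns for your import.
INSERT_SQL = (
    "INSERT INTO legacy_contact_staging (legacy_id, name, email, imported_at) "
    "VALUES (%s, %s, %s, NOW())"
)

def rows_from(stream):
    """Each line from a legacy extractor is one JSON object."""
    for line in stream:
        line = line.strip()
        if line:
            rec = json.loads(line)
            yield (rec["legacy_id"], rec["name"], rec.get("email"))

def main():
    conn = pymysql.connect(host="localhost", user="egw",
                           password="secret", database="egroupware")
    try:
        with conn.cursor() as cur:
            cur.executemany(INSERT_SQL, list(rows_from(sys.stdin)))
        conn.commit()   # all-or-nothing, so a failed run can simply be re-run
    except Exception:
        conn.rollback()
        raise
    finally:
        conn.close()

if __name__ == "__main__":
    main()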
Regarding our recent posts on the progress being made with pERP, please recall that I mentioned to Justin that one of the requirements for pERP to enter the mainstream is a migration path. You can’t upgrade a client, hand over an empty system, and tell them to enter everything by hand. For the migration to succeed, the new system has to have all the latest data, plus as much of the historical data as is deemed worth keeping.
I think we may all have to begrudgingly accept that migrating from the various systems out there to eGW+pERP is always going to be a dirty job, one that’s only ever “perfected” to the point where it looks to the programmer as if it’s working OK. I know it’s not ideal, but the key consideration is “does it work?!” and not “is it perfect?!”
Still, I’m trying to be mindful of whether I’ve overlooked the same thing as the OP here, and any other pointers will be most gratefully received. Fortunately, I am already updating egw_api_content_history.
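For anyone doing the same, that update is just another direct SQL write per imported record. The column names in the sketch below are placeholders (check them against DESCRIBE egw_api_content_history on your own install), so treat it as an outline rather than something to paste in:

#!/usr/bin/env python
# Outline only: keep egw_api_content_history in step when rows are written
# directly. Column names below are PLACEHOLDERS -- run
# "DESCRIBE egw_api_content_history" on your install and adjust to match
# the real schema before using anything like this.

import time

import pymysql  # same driver assumption as the import script above

HISTORY_SQL = (
    "INSERT INTO egw_api_content_history "
    "(app_name, content_id, added_ts) "   # placeholder columns
    "VALUES (%s, %s, %s)"
)

def record_added(conn, app, content_id):
    """Note in the history table that one imported record was added."""
    with conn.cursor() as cur:
        cur.execute(HISTORY_SQL, (app, content_id, int(time.time())))

if __name__ == "__main__":
    conn = pymysql.connect(host="localhost", user="egw",
                           password="secret", database="egroupware")
    record_added(conn, "addressbook", 12345)   # example app/id only
    conn.commit()
    conn.close()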
Kindest thanks,
Paul