Some SWAT users have encountered an input conversion error when running the SWAT model. The video below shows what causes this error and how to fix it.
If you get an error message like the one below, you have come to the right place.
It means you are trying to write more data than the data table can hold. For example, you will get this message if you try to write the string “I’m a long string” to a text field that can only be 4 characters long.
It seems ArcSWAT doesn’t check the length of the data against the maximum length of the field when writing these data tables. Whenever you have a longer string, you will get this error message.
The solution is simple: either make your string shorter or increase the length of those fields. As we don’t want to change our data, the latter is better. But how?
The secret is in SWAT2012.mdb. It has tables with names like ***rng, e.g. solrng and hrurng. These tables define the structure of the data tables we need in our project database, including the maximum length of each text field. Take table solrng as an example, shown below. Each row defines one column in table sol. Among all these columns, SOIL is defined as TEXT(4), which means it’s a text column with a maximum length of 4 characters. The columns SLOPE_CD and SNAM are similar.
Below is a list of these template tables and the corresponding tables in the project database. As you can see, most of the tables in the project database are generated based on their template tables.
| Template Table in SWAT 2012 | Corresponding Data Table in Project Database |
| --- | --- |
You may already know the solution to the error. Yes, we can increase the length of the text fields by changing their definitions, i.e. change TEXT(4) to TEXT(n), where n is a number big enough to hold all possible strings, say 100. We can usually tell which table has the problem from the error message, like the one given at the beginning, which says the sol table has a problem. So we go to the solrng table, increase the length limit of all likely text columns, and try to write the data table again. If you don’t want to guess which column should be changed, just change all of them to 100. That should be good enough for most cases.
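The change itself can be scripted. Below is a minimal sketch, assuming the rng table stores each column's type as a string like "TEXT(4)" (as in the solrng example above); the function name and list-based stand-in for the table are my own, not part of ArcSWAT.

```python
import re

def widen_text_fields(type_strings, new_len=100):
    """Bump every TEXT(n) definition to TEXT(new_len); leave other types alone."""
    return [re.sub(r"TEXT\(\d+\)", f"TEXT({new_len})", t) for t in type_strings]

# In-memory stand-in for the type column of a ***rng table.
print(widen_text_fields(["TEXT(4)", "FLOAT", "TEXT(16)"]))
# ['TEXT(100)', 'FLOAT', 'TEXT(100)']
```

In practice you would apply the same substitution to the definition rows inside SWAT2012.mdb and then re-run the table write.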
Hope you find this helpful and happy SWAT modelling.
SWAT Bug in Reading CHM File
If you are dealing with more than 6 soil layers, be cautious about the water quality outputs. A bug in SWAT affects the initial N/P in soil and, in turn, the water quality in the main channel.
Initial soil N/P can be set in CHM files for up to 10 soil layers. By default, NO3, organic N and organic P are set to 0 mg/kg, which means SWAT calculates the initial concentration based on organic carbon and depth. Soluble P is set to 5 mg/kg by default.
Soil Layer : 1 2 3 4 5 6 7 8 9 10
Soil NO3 [mg/kg] : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Soil organic N [mg/kg] : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Soil labile P [mg/kg] : 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00 5.00
Soil organic P [mg/kg] : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Phosphorus perc coef : 10.00 10.00 10.00 10.00 10.00 10.00 10.00 10.00 10.00 10.00
The following code is used to read the above values in readchm.f, where mlyr is the maximum number of soil layers.
read (106,5100,iostat=eof) (sol_no3(j,ihru), j = 1, mlyr)
read (106,5100,iostat=eof) (sol_orgn(j,ihru), j = 1, mlyr)
read (106,5100,iostat=eof) (sol_solp(j,ihru), j = 1, mlyr)
read (106,5100,iostat=eof) (sol_orgp(j,ihru), j = 1, mlyr)
read (106,5100,iostat=eof) (pperco_sub(j,ihru), j = 1, mlyr)
5100 format (27x,10f12.2)
The problem is with mlyr. It’s not the actual number of soil layers, which is sol_nly. Its value can exceed 10, since SWAT adds 4 on top of it in getallo.f, as shown below.
!! septic change 1-28-09 gsm
mlyr = mlyr + 4
!! septic change 1-28-09 gsm
In my case, mlyr is 14, and SWAT tries to read 14 values from the CHM file for each initial soil N/P variable. As there are only 10 values in each record, SWAT goes to the next record to get the additional 4 values. Thus, one read statement consumes 2 data records in the CHM file, and the values being read are not the values that were set. Organic N would be 5 mg/kg rather than 0 mg/kg. This results in much less organic N compared to the value calculated from organic carbon.
The solution is simple: replace mlyr with sol_nly. sol_nly is the actual number of soil layers and is already used when reading soil parameters from SOL files (readsol.f). It never exceeds 10.
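The misalignment is easy to reproduce. The sketch below mimics the Fortran format (27x,10f12.2) in Python on records shaped like the CHM sample above (the helper names and in-memory records are my own): with 14 values requested per read, the second read lands on the labile P record instead of the organic N record.

```python
def label(name):
    # 27 characters are skipped by the 27x edit descriptor.
    return name.ljust(25) + ": "

# In-memory stand-in for four CHM records, 10 values of width 12 each.
records = [
    label("Soil NO3 [mg/kg]")       + "        0.00" * 10,
    label("Soil organic N [mg/kg]") + "        0.00" * 10,
    label("Soil labile P [mg/kg]")  + "        5.00" * 10,
    label("Soil organic P [mg/kg]") + "        0.00" * 10,
]

def formatted_read(lines, pos, n):
    """Mimic a formatted READ with format (27x,10f12.2): take up to 10
    values per record, advancing to the next record until n are collected."""
    values = []
    while len(values) < n:
        line = lines[pos]
        pos += 1
        fields = [line[27 + 12 * i : 27 + 12 * (i + 1)] for i in range(10)]
        values.extend(float(f) for f in fields[: n - len(values)])
    return values, pos

# Buggy behaviour: mlyr = 14, so each read eats two records, and the
# organic N read actually starts on the labile P record.
pos = 0
sol_no3, pos = formatted_read(records, pos, 14)
sol_orgn, pos = formatted_read(records, pos, 14)
print(sol_orgn[0])  # 5.0 -- should be 0.0
```

With n = sol_nly (at most 10), each read consumes exactly one record and the values line up again.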
As a SWAT modeller and programmer, it’s very natural that I want to go deep into the SWAT source code whenever questions come up. And it’s even better to run SWAT live and look at the values of all the variables to help troubleshooting. To do this, there are two options: the commercial Intel Fortran compiler and the free gfortran. It’s hard to say which one is better. I chose gfortran for its price, freedom, cross-platform support and multi-language compiling ability. For some reason, not all SWAT source code files follow the same standard, so compiling SWAT with gfortran is not as easy as with Intel Fortran. After some research, I worked out the best way to compile SWAT with gfortran. All the necessary steps are documented in a single user guide, with the hope that it will be helpful.
These efforts were recorded in the following posts, along with some important links.
Compile and Debug SWAT with GFortran and Eclipse (Windows version)
February 2, 2014 SWAT Makefile Updated – Ignore Underflow
January 28, 2014 MinGW Installation Guide for SWAT Debugging
January 28, 2014 Debug SWAT in Eclipse – Utilize Makefile
January 24, 2014 SWAT Makefile Updated – Stop running when overflow happens
January 23, 2014 Makefile – Compile SWAT using gfortran without modification
When reservoir outflow is simulated with measured daily/monthly outflow, a discharge file (.day/.mon) is generated. The file is overwritten when the simulation is set up with the given simulation period. For those who don’t use ArcSWAT/SWAT Editor to run the model, remember to set up the simulation before running the model if there are reservoirs or point sources in your watershed and the daily/monthly discharge method is used.
The data flow of reservoir discharge data in ArcSWAT/SWAT Editor is described below. Point source discharge data (.DAT) also follows exactly the same data flow.
1. Prepare the original discharge data in dbf/txt format
Note: The commas in the column names are a must. You may see an error message like the one below if they are missing.
2. Import the discharge data in ArcSWAT/SWAT Editor. The discharge data is imported into the timeseries table in the project mdb. Note that TSTypeID is 0 and the time is the same as in the original discharge file. The data here should cover the future simulation period.
3. Re-write .Res/.Lwq, which re-writes all RES files and any MON/DAY files. Note that the data in the generated DAY file doesn’t start from Jan 1, 1990 (on which day the discharge is 0.07 m3/s), as the simulation period hasn’t been set. ArcSWAT/SWAT Editor may set an arbitrary starting and ending date here to make the time range big enough (like from 1/1/1000 to 1/1/3001). This is not the final version used in the simulation.
Daily Reservoir Outflow file: .day file Subbasin:40 9/25/2014 12:00:00 AM ArcSWAT 2012.10_1.15
4. Set up the simulation; the MON/DAY file is overwritten again to extract the discharge data falling between the starting and ending dates from the timeseries data table. Now the data in the DAY file starts from Jan 1, 1990.
Daily Reservoir Outflow file: .day file Subbasin:40 9/25/2014 12:00:00 AM ArcSWAT 2012.10_1.15
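The step-4 extraction boils down to a date-range filter over the timeseries table. Here is a minimal sketch with in-memory stand-ins (the record layout and variable names are mine, not the real ArcSWAT schema):

```python
from datetime import date

# Hypothetical stand-in for the timeseries table in the project mdb.
timeseries = [
    (date(1989, 12, 31), 0.05),
    (date(1990, 1, 1), 0.07),   # first day of the simulation period
    (date(1990, 1, 2), 0.08),
]

sim_start, sim_end = date(1990, 1, 1), date(1990, 12, 31)

# Keep only records inside the simulation period, formatted one per line.
day_records = [f"{flow:12.3f}" for d, flow in timeseries if sim_start <= d <= sim_end]
print(day_records[0])  # the Jan 1, 1990 value, 0.07 m3/s
```

Until this filter runs (i.e. until the simulation is set up), the DAY file still holds the placeholder date range from step 3, which is why skipping setup produces wrong discharge inputs.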
A default CN2 is defined in the mgt file, as shown in the following example.
49.00 | CN2: Initial SCS CN II value
Question 1: Where does this default CN2 come from?
The default CN2 comes from either the crop table or the urban table of SWAT2012.mdb. Four columns (CN2A, CN2B, CN2C, CN2D) are defined in table crop/urban, which are the CN2 values for soils with hydrologic groups A, B, C and D respectively. An HRU is a combination of a unique landuse, soil and slope. From the soil type, the hydrologic group is obtained from the usersoil table. Then, depending on the landuse type, the CN2 value is read from table crop or urban.
Urban is quite different from crop. The default CN2 is just for the pervious surface. For the impervious surface in an urban area, URBCN in table urban is used; its value is usually 98.
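The lookup described above can be sketched as a two-step dictionary access (the table contents here are illustrative placeholders, not the real SWAT2012.mdb values):

```python
# Illustrative rows: CN2A..CN2D per landuse, and hydrologic group per soil.
crop_table = {"AGRL": {"A": 67.0, "B": 77.0, "C": 83.0, "D": 87.0}}
usersoil = {"MySoil": "B"}

def default_cn2(landuse, soil):
    """Hydrologic group from usersoil, then the matching CN2 column."""
    group = usersoil[soil]            # A, B, C or D
    return crop_table[landuse][group]  # column CN2A/CN2B/CN2C/CN2D

print(default_cn2("AGRL", "MySoil"))  # 77.0
```

For urban landuses the same lookup applies to the pervious fraction, while the impervious fraction uses URBCN instead.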
Question 2: What’s the impact of the default CN2 on model results?
The CN2 in the mgt file is the initial CN2 for the HRU. If CNOP is NOT defined in any plant/harvest/tillage operation, the curve number used in the infiltration calculation just depends on the default CN2 and the soil moisture. In this case, the default CN2 has a big impact on infiltration and surface runoff, and further on flow discharge. That’s why this is usually the main calibrated parameter.
However, the default CN2 is replaced by CNOP if CNOP is defined in any plant/harvest/tillage operation. Then the default CN2 is only the initial value and only has an impact before it is replaced. In this case, it’s CNOP we should calibrate rather than CN2.
So, for scenarios that focus on tillage or crop change, it’s very important to set CNOP.
Question 3: What if the default CN2 is 0?
The SWAT model checks the default CN2 and makes sure its value stays between 35 and 98, so a default CN2 of 0 is raised to 35. See the following code from readmgt.f.
if (cn2(ihru) <= 35.0) cn2(ihru) = 35.0
if (cn2(ihru) >= 98.0) cn2(ihru) = 98.0
Updates July 17, 2014: The SWAT team has confirmed that prf_bsn should be real rather than integer. It will probably be fixed in the next release.
SWAT was updated on June 24, 2014 to Rev 627. It gives modelers more control over three parameters (prf, r2adj and surlag) and adds more outputs for the auto-irrigation operation in output.mgt. Let’s see the details.
Changes from SWAT Version History
- ‘surlag’ input (.bsn) changed to ‘surlag_bsn’ (if the input for surlag is <= 0 in the .hru file, the model will use surlag_bsn from .bsn file – defaulted to 4.)
- ‘prf’ taken out (.hru) and changed to ‘surlag’ (if the input for prf is <= 0 in the .hru file, the model will use prf_bsn from the .bsn file defaulted to 1.)
Autoirr issues in output.mgt resolved (reporting issue)
- NOTES regarding the autoirr changes: The SCHEDULED irrigation from the .mgt input file will be labelled as: “SCHED AUTOIRR” in the output.mgt file. The actual applications will be labeled in same file as “AUTOIRR”. The ‘scheduled irrigation’ is when it was scheduled in the .mgt rotation. AUTOIRR is when it actually was triggered and applied.
- Subdaily problem with reservoirs fixed
- No significant changes.
- ‘r2adj’ input (.bsn) changed to ‘r2adjbsn’
- ‘prf’ input (.bsn) changed to ‘prf_bsn’
- Minor subdaily changes
- Format extended for MSK_K in input.std output file (from f6.2 to f8.2)
- No significant changes.
More Details from Analysis of Source Codes
prf
Reach peak rate adjustment factor for sediment routing in the channel. It allows the impact of the peak flow rate on sediment routing and channel reshaping to be taken into account.
In the previous version, it was a basin-level parameter defined in the .bsn file, and every reach shared the same value.
Now it’s a reach-level parameter: each reach can define its own value. It’s added to the end of the .rte file, and the default value is 0, in which case the value given in the .bsn file (prf_bsn) is used.
In the previous version, the variable for prf read from .bsn was defined as real. In the new version, it’s defined as integer. When trying to run the new version on a model generated by a previous version of ArcSWAT, it may give the following error from trying to read an integer from a real value. Changing the real value to an integer (e.g. 1.0000 to 1) solves this problem.
At line 386 of file readbsn.f (unit = 103, file = ‘basins.bsn’)
Fortran runtime error: Bad integer for item 1 in list input
Only gfortran-compiled SWAT executables have this problem; the Intel Fortran compiled version works without issue. Should this variable be defined as real?
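The workaround can be automated. Below is a sketch that rewrites the prf_bsn value in a .bsn line from real to integer, assuming the usual "value | NAME : description" layout shown elsewhere in this post (the function name is mine):

```python
def fix_prf_line(line):
    """Rewrite a '1.0000'-style prf value to '1' so the gfortran build can read it."""
    value, sep, comment = line.partition("|")
    if "PRF" in comment.upper():
        value = str(int(float(value)))   # e.g. "1.0000" -> "1"
        return f"{value:>16}{sep}{comment}"
    return line

print(fix_prf_line("          1.0000    | PRF : Peak rate adjustment factor"))
```

Lines for other parameters pass through unchanged, so the function can be applied to every line of basins.bsn.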
r2adj
Soil retention parameter adjustment factor (greater than 1).
Similar to prf, this basin-level parameter is refined to HRU level, giving you more control over it for different HRUs. It is read from the HRU file after surlag, and the default value is 1. The value defined in the .bsn file (r2adj_bsn) is used when the HRU value is less than or equal to 0.
surlag
Surface runoff lag time. This parameter is needed in subbasins where the time of concentration is greater than 1 day. SURLAG is used to create a “storage” for surface runoff, allowing the runoff to take longer than 1 day to reach the subbasin outlet.
Similar to r2adj, this basin-level parameter is refined to an HRU-level parameter. It is read after n_lnco and before r2adj. If it can’t be read, it is set to surlag_bsn defined in .bsn, whose default value is 4.
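All three parameters follow the same override pattern, which can be sketched in a couple of lines (the function name is mine): the HRU/reach value wins only when it is positive; otherwise the basin default applies.

```python
def effective(local_value, bsn_value):
    """HRU/reach-level value if positive, else the basin-level default."""
    return local_value if local_value > 0 else bsn_value

print(effective(0.0, 4.0))  # surlag falls back to surlag_bsn -> 4.0
print(effective(2.5, 4.0))  # HRU value used -> 2.5
```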
- Auto Irrigation
Output information for irr_rch.f and irr_res.f is added to output.mgt. It’s still labelled “AUTOIRR” rather than “SCHED AUTOIRR” as described in the SWAT version history.
- ArcSWAT 2012.10.15
- SSURGO Soil Database is used
- The following error message is given when running Create Tables
When generating the sol table, ArcSWAT does the following work.
1. Get all the unique soil IDs from the hrus table in the project database.
2. Create a new table (tbSoilList) in the SSURGO database (usually located at C:\Swat\ArcSWAT\Databases\SWAT_US_SSURGO_Soils.mdb) and copy all the soil IDs into it.
3. Find the soil parameters for each soil ID from table SSURGO_Soils.
4. Write the soil parameters into the sol table.
The error message comes from step 4. ArcSWAT doesn’t handle the case where a soil ID can’t be found in the SSURGO_Soils table. In that case, the returned soil parameter is an empty array, and trying to read the first element of this array gives the error message shown above.
Check the soil shapefile/grid to make sure all soil IDs have been defined in the SSURGO_Soils table, then redo the Land Use/Soil/Slope Definition.
Note: This should also work for other soil options (user soil lookup table or STATSGO) and other ArcSWAT versions.
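A quick set difference finds the offending soil IDs before you redo the definition. This is a sketch with in-memory stand-ins for the two tables; in practice the IDs would be queried from the project mdb and SWAT_US_SSURGO_Soils.mdb, and the example IDs are made up:

```python
# Hypothetical soil IDs for illustration only.
project_soil_ids = {"MN045", "MN046", "MN999"}  # unique IDs from the hrus table
ssurgo_soil_ids = {"MN045", "MN046", "MN047"}   # IDs defined in SSURGO_Soils

missing = project_soil_ids - ssurgo_soil_ids
print(sorted(missing))  # ['MN999'] -- these IDs trigger the Create Tables error
```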
After several posts about compiling and debugging SWAT with GFortran and Eclipse, I think it would be good to compile all the posts into a single document as a guide on this topic. So I created a Google Doc here and would like to invite all of you to comment.
What happened when the “Import Files to Database” button is clicked?
- Copy SWATOutput.mdb from C:\swat\ArcSWAT\Databases\SWATOutput.mdb to [ArcSWAT Project]\Scenarios\Default\TablesOut. The previous database is overwritten.
- Copy Schema.ini from C:\swat\ArcSWAT\Databases\Schema.ini to [ArcSWAT Project]\Scenarios\Default\txtinout. This file defines the data columns in the text files generated in step 3. More information can be found on MSDN.
- Read the SWAT output files (output.rch, output.sub, etc.) and generate corresponding text files (outputsub.txt, outputRch.txt, outputSed.txt, outputHru.txt, outputRsv.txt, outputPst.txt) in [ArcSWAT Project]\Scenarios\Default\txtinout. The result files are read line by line, and a string parser is used to extract the data values for each column; the parsing differs slightly between SWAT versions.
- Copy the data from the generated text files to tables in SWATOutput.mdb through a “SELECT INTO” statement.
- Delete all the text files generated in step 3.
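The step-3 parsing is essentially fixed-width slicing. Here is a minimal sketch with a hypothetical column layout (the widths and sample record are mine, not the real output.rch format): each line is cut at fixed offsets into fields that the Schema.ini-driven import in step 4 can then consume.

```python
def parse_rch_line(line, widths=(6, 9, 6, 12)):
    """Split one fixed-width record into stripped fields using the given widths."""
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w].strip())
        pos += w
    return fields

# Hypothetical record built to match the widths above.
sample = f"{'REACH':>6}{40:>9}{1:>6}{12.345:>12.3f}"
print(parse_rch_line(sample))  # ['REACH', '40', '1', '12.345']
```

Since the real output files' column offsets shift between SWAT versions, the widths would need to be kept in sync with the version being parsed, which is exactly the fragility described below.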
Most of the time is spent in steps 3 and 4, where the data format is converted twice: first from SWAT format to text format, then from text format to mdb format. The first conversion probably costs more.
So the question is: why doesn’t SWAT just generate the results in the text format required in step 3? Is there any advantage of the current result format over text format? What’s beyond doubt is that generating results in text format would greatly reduce the time spent in this import function.
The result format also has an impact on SWAT_CUP, which reads specific results from the result files after each simulation to calculate the objective functions. From my experience, sometimes the time spent on reading results is even longer than the simulation time!
For SWAT result analysis, I usually want outputs for a specific element (HRU, subbasin, reach, etc.) over a specific time period. It’s a query process and would get the best performance from a database, e.g. mdb. This may be the thinking behind the “Import Files to Database” function, and it’s also true for SWAT_CUP. So why not just generate the outputs in a database format directly in the SWAT model? Then we wouldn’t need to spend extra time converting the format, and result analysis would be a lot easier, especially for daily outputs.