General {#ncFAQGeneral}
=======================
9 What Is netCDF? {#What-Is-netCDF}
13 NetCDF (network Common Data Form) is a set of interfaces for
14 array-oriented data access and a [freely](http://www.unidata.ucar.edu/software/netcdf/docs/COPYRIGHT) distributed
15 collection of data access libraries for C, Fortran, C++, Java, and other
16 languages. The netCDF libraries support a machine-independent format for
17 representing scientific data. Together, the interfaces, libraries, and
18 format support the creation, access, and sharing of scientific data.
NetCDF data is:

- *Self-Describing*. A netCDF file includes information about the data it contains.
- *Portable*. A netCDF file can be accessed by computers with different ways of storing integers, characters, and floating-point numbers.
- *Scalable*. A small subset of a large dataset may be accessed efficiently.
29 - *Appendable*. Data may be appended to a properly structured netCDF
30 file without copying the dataset or redefining its structure.
31 - *Sharable*. One writer and multiple readers may simultaneously
32 access the same netCDF file.
33 - *Archivable*. Access to all earlier forms of netCDF data will be
34 supported by current and future versions of the software.
36 The netCDF software was developed by Glenn Davis, Russ Rew, Ed Hartnett,
37 John Caron, Dennis Heimbigner, Steve Emmerson, Harvey Davies, and Ward
38 Fisher at the Unidata Program Center in Boulder, Colorado, with
39 [contributions](/netcdf/credits.html) from many other netCDF users.
43 How do I get the netCDF software package? {#HowdoIgetthenetCDFsoftwarepackage}
47 The latest source distribution, which includes the C libraries and
48 utility programs, is available from [the NetCDF Downloads
49 page](/downloads/netcdf/index.jsp). Separate source distributions for
50 the Java library, Fortran libraries, and C++ libraries are also
51 available there. Installation instructions are available with the
52 distribution or [online](http://www.unidata.ucar.edu/software/netcdf/docs/building.html).
54 Binary distributions of netCDF are available for various platforms from
55 package management systems such as dpkg, RPM, fink, MacPorts, Homebrew,
56 OpenCSW, OpenPKG, and the FreeBSD Ports Collection.
60 How do I convert netCDF data to ASCII or text? {#How-do-I-convert-netCDF-data-to-ASCII-or-text}
65 One way to convert netCDF data to text is to use the **ncdump** tool
66 that is part of the netCDF software distribution. It is a command line
67 tool that provides a text representation of a netCDF file's data, just its
68 metadata, or just the data for specified
69 variables, depending on what arguments you use. For more information,
70 see the [ncdump documentation](http://www.unidata.ucar.edu/software/netcdf/docs/ncdump-man-1.html).
72 Another good tool for conversion of netCDF data to text is the ["ncks" program](http://nco.sourceforge.net/nco.html#ncks-netCDF-Kitchen-Sink) that's one of the utility programs in the [NCO (NetCDF Operators)](software.html#NCO) package. Similar capabilities are available using programs from the [CDO (Climate Data Operators)](software.html#CDO) software, commands from [NCL (NCAR Command Language)](software.html#NCL), or various other packages such as [ANAX](http://science.arm.gov/~cflynn/ARM_Tested_Tools/), cdf2asc, and NOESYS, all "third party" netCDF utilities developed and supported by other organizations. You can find more information about these third-party packages on the [Software for Manipulating or Displaying NetCDF Data](software.html) page.
74 You can also get netCDF data in ASCII from an OPeNDAP server by using a
75 ".ascii" extension with the URL that specifies the data. For details,
76 see the OPeNDAP page on [Using a Spreadsheet Application with DODS](http://www.opendap.org/useExcel).
78 Another freely available tool, [netcdf4excel](https://code.google.com/p/netcdf4excel/), has been developed as a netCDF add-in for MS Excel that can facilitate the conversion of netCDF data to and from text form.
80 Note that **ncdump** and similar tools can print metadata and data values
81 from netCDF files, but in general they don't understand coordinate
82 systems specified in the metadata, only variable arrays and their
83 indices. To interpret georeferencing metadata so you can print the data
84 within a latitude/longitude bounding box, for example, you need a higher
85 level tool that interprets conventions for specifying coordinates, such
86 as the CF conventions. Or you can write a small program using one of the
87 language APIs that provide netCDF support, for which [examples are available](http://www.unidata.ucar.edu/software/netcdf/examples/programs/).
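For instance, here is a minimal C sketch of that approach: it opens a file, reads a one-dimensional float variable, and prints its values as text. The file name "foo.nc" and variable name "t" are illustrative, and error handling is abbreviated.

~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
/* Print the values of a 1-D float variable as text (illustrative sketch). */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

int main(void) {
    int ncid, varid, dimid, status;
    size_t i, n;
    float *vals;

    if ((status = nc_open("foo.nc", NC_NOWRITE, &ncid)) != NC_NOERR ||
        (status = nc_inq_varid(ncid, "t", &varid)) != NC_NOERR) {
        fprintf(stderr, "%s\n", nc_strerror(status));
        return 1;
    }
    nc_inq_vardimid(ncid, varid, &dimid);   /* assumes "t" has one dimension */
    nc_inq_dimlen(ncid, dimid, &n);
    vals = malloc(n * sizeof(float));
    nc_get_var_float(ncid, varid, vals);
    for (i = 0; i < n; i++)
        printf("%g\n", vals[i]);
    free(vals);
    nc_close(ncid);
    return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~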
91 How do I convert ASCII or text data to netCDF? {#How-do-I-convert-ASCII-or-text-data-to-netCDF}
95 One way to convert data in text form to netCDF is to use the **ncgen**
96 tool that is part of the netCDF software distribution. Using **ncgen** for
97 this purpose is a two-step process:
99 1. Convert text data to a file in [CDL form](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#CDL-Syntax) using a text
100 editor or text manipulation tools
101 2. Convert the CDL representation to netCDF using the **ncgen** tool with
102 the "-o" or "-b" option
104 For more information, see the [ncgen documentation](http://www.unidata.ucar.edu/software/netcdf/docs/ncgen-man-1.html).
106 If you have installed the NCAR Command Language
107 ([NCL](http://www.ncl.ucar.edu/)) software, there are functions
108 available and described
109 [here](http://www.ncl.ucar.edu/Applications/list_io.shtml) and
110 [here](http://www.ncl.ucar.edu/Applications/read_ascii.shtml) for
reading ASCII and tables into NCL and writing the data out to netCDF files.
114 With access to [MATLAB](http://www.mathworks.com/), you can create a
115 schema for the desired netCDF file using
116 [ncwriteschema](http://www.mathworks.com/help/techdoc/ref/ncwriteschema.html),
read the text data using [textscan](http://www.mathworks.com/help/techdoc/ref/textscan.html), and
119 write the data to a netCDF file using
120 [ncwrite](http://www.mathworks.com/help/techdoc/ref/ncwrite.html).
122 What's new in the latest netCDF release?
125 [Release notes](http://www.unidata.ucar.edu/software/netcdf/release-notes-latest.html) for the
126 latest netCDF release are available that describe new features and fixed
127 bugs since the previous release.
131 What is the best way to represent [some particular data] using netCDF? {#What-is-the-best-way-to-represent-some-particular-data-using-netCDF}
134 There are many ways to represent the same information in any
135 general-purpose data model. Choices left up to the user in the case of
136 netCDF include which information to represent as variables or as
137 variable attributes; what names to choose for variables, dimensions, and
138 attributes; what order to use for the dimensions of multidimensional
139 variables; what variables to include in the same netCDF file; and how to
140 use variable attributes to capture the structure and meaning of data. We
141 provide some guidelines in the NetCDF User's Guide (e.g., the section on
142 [Differences between Attributes and Variables](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/Differences-between-Attributes-and-Variables.html#Differences%20between%20Attributes%20and%20Variables))
143 and in a new web document [Writing NetCDF Files: BestPractices](http://www.unidata.ucar.edu/software/netcdf/BestPractices.html), but we've found that
144 a little experience helps. Occasionally we have decided it was useful to
change the structure of netCDF files after experience with how the data is used.
150 What convention should be used for the names of netCDF files? {#What-convention-should-be-used-for-the-names-of-netCDF-files}
154 NetCDF files should have the file name extension ".nc". The recommended
155 extension for netCDF files was changed from ".cdf" to ".nc" in 1994 in
156 order to avoid a clash with the NASA CDF file extension, and now it also
157 avoids confusion with "Channel Definition Format" files.
163 Is there a mailing list for netCDF discussions and questions? {#Is-there-a-mailing-list-for-netCDF-discussions-and-questions}
166 The netcdfgroup@unidata.ucar.edu mailing-list is intended for
167 discussions and announcements about netCDF interfaces, software, and
168 use. The volume of this list varies widely, from one message per month
169 to a dozen messages per day (especially after a new release). A message
170 posted to this mailing-list will be seen by several hundred people, so
171 it's usually not appropriate for asking simple questions about use. Such
172 questions should instead be sent to support-netcdf@unidata.ucar.edu.
174 If you would prefer to get only a single daily digest of the postings to
175 the netcdfgroup mailing-list, subscribe instead to the digest form of
176 the mailing-list, containing the same messages but appearing at most
177 once per day instead of whenever anyone sends a message to the group.
179 To subscribe or unsubscribe to either of these mailing lists, use one of
180 these mailing list actions:
* [subscribe: non-digest](mailto:netcdfgroup-join@unidata.ucar.edu)
183 * [subscribe: digest](mailto:netcdfgroup-request@unidata.ucar.edu?subject=subscribe%0A%20%20%20%20%20%20%20%20%20%20digest)
185 * [change subscription options](http://mailman.unidata.ucar.edu/mailman/options/netcdfgroup)
186 * [view posts](/mailing_lists/archives/netcdfgroup/)
187 * [search archives](/search.jsp).
191 Where are some examples of netCDF datasets? {#Where-are-some-examples-of-netCDF-datasets}
194 Here are some [example netCDF files](http://www.unidata.ucar.edu/software/netcdf/examples/files.html).
198 What is the best way to handle time using netCDF? {#What-is-the-best-way-to-handle-time-using-netCDF}
202 Discussions of conventions for representing time and handling
203 time-dependent data have been a past topic of discussion on the
204 netcdfgroup mailing list. When the subject comes up, interesting
discussions often result, so we've archived past discussions on this topic at
207 [http://www.unidata.ucar.edu/software/netcdf/time/](http://www.unidata.ucar.edu/software/netcdf/time/).
209 A summary of Unidata's recommendations is available from
210 [http://www.unidata.ucar.edu/software/netcdf/time/recs.html](http://www.unidata.ucar.edu/software/netcdf/time/recs.html).
211 Briefly, we recommend use of the units conventions supported by the
[udunits library](/software/udunits/) for time and other units attributes.
215 Other groups have established more specific conventions that include the
216 representation of time in netCDF files. For more information on such
217 conventions, see the NetCDF Conventions Page at
218 [http://www.unidata.ucar.edu/software/netcdf/conventions.html](http://www.unidata.ucar.edu/software/netcdf/conventions.html).
222 Who else uses netCDF? {#Who-else-uses-netCDF}
225 The netCDF mailing list has over 500 addresses (some of which are
226 aliases to more addresses) in thirty countries. Several groups have
227 [adopted netCDF as a standard](http://www.unidata.ucar.edu/software/netcdf/docs/standards.html) for
228 representing some forms of scientific data.
230 A somewhat dated description of some of the projects and groups that
231 have used netCDF is available from
232 [http://www.unidata.ucar.edu/software/netcdf/usage.html](http://www.unidata.ucar.edu/software/netcdf/usage.html).
236 What are some references to netCDF? {#What-are-some-references-to-netCDF}
239 A primary reference is the User's Guide:
241 Rew, R. K., G. P. Davis, S. Emmerson, and H. Davies, **NetCDF User's
242 Guide for C, An Interface for Data Access, Version 3**, April 1997.
244 To cite use of netCDF software, please use this Digital Object Identifier (DOI):
245 [http://dx.doi.org/10.5065/D6H70CW6](http://dx.doi.org/10.5065/D6H70CW6)
247 Current online and downloadable documentation is available from the
248 [documentation directory](http://www.unidata.ucar.edu/software/netcdf/docs/).
250 Other references include:
Brown, S. A., M. Folk, G. Goucher, and R. Rew, "Software for Portable
253 Scientific Data Management," Computers in Physics, American Institute of
254 Physics, Vol. 7, No. 3, May/June 1993, pp. 304-308.
256 Fulker, D. W., "Unidata Strawman for Storing Earth-Referencing Data,"
257 Seventh International Conference on Interactive Information and
258 Processing Systems for Meteorology, Oceanography, and Hydrology, New
259 Orleans, La., American Meteorology Society, January 1991.
261 Jenter, H. L. and R. P. Signell, 1992. "[NetCDF: A Freely-Available Software-Solution to Data-Access Problems for Numerical Modelers](http://www.unidata.ucar.edu/software/netcdf/papers/jenter_signell_92.pdf)". Proceedings
262 of the American Society of Civil Engineers Conference on Estuarine and
263 Coastal Modeling. Tampa, Florida.
265 Kuehn, J.A., "Faster Libraries for Creating Network-Portable
266 Self-Describing Datasets", Proceedings of the 37th Cray User Group
267 Meeting, (Barcelona, Spain, March 1996), Cray User Group, Inc.
269 Rew, R. K. and G. P. Davis, "NetCDF: An Interface for Scientific Data
Access," IEEE Computer Graphics and Applications, Vol. 10, No. 4, pp. 76-82, July 1990.
273 Rew, R. K. and G. P. Davis, "The Unidata netCDF: Software for Scientific
274 Data Access," Sixth International Conference on Interactive Information
275 and Processing Systems for Meteorology, Oceanography, and Hydrology,
Anaheim, California, American Meteorology Society, pp. 33-40, February 1990.
279 Rew, R. K. and G. P. Davis, " [Unidata's netCDF Interface for Data Access: Status and Plans](/netcdf/ams97.html)," Thirteenth International Conference on Interactive Information and Processing Systems for Meteorology, Oceanography, and Hydrology, Anaheim, California, American Meteorology Society, February 1997.
283 Is there a document describing the actual physical format for a Unidata netCDF file? {#Is-there-a-document-describing-the-actual-physical-format-for-a-Unidata-netCDF-file}
286 A short document that specifies the [format of netCDF classic and 64-bit offset files](http://earthdata.nasa.gov/sites/default/files/esdswg/spg/rfc/esds-rfc-011/ESDS-RFC-011v2.00.pdf) has been approved as a standard by the NASA ESDS Software Process Group.
288 In addition, the NetCDF User's Guide contains an
289 [appendix](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#File-Format) with the same format specification.
291 The ["NetCDF File Structure and Performance"](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#Structure) chapter provides a less formal explanation of the format of netCDF data to help clarify the performance implications of different data organizations.
293 If users only access netCDF data through the documented interfaces, future changes to the format will be transparent.
Installation and Porting {#Installation-and-Porting}
====================================================
300 What does netCDF run on? {#What-does-netCDF-run-on}
We test releases on several operating systems with various compilers, including Linux, Mac OS X, and Windows (some versions, see below).
The [NetCDF Installation and Porting Guide](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-install/index.html) explains how to build netCDF from source on various platforms. Often, it's as easy as running `./configure` followed by `make check install`.
323 How can I use current versions of netCDF-4 with Windows? {#HowcanIusecu}
See [http://www.unidata.ucar.edu/software/netcdf/docs/winbin.html](http://www.unidata.ucar.edu/software/netcdf/docs/winbin.html).
329 How can I use netCDF-4.1 with Windows? {#HowcanIusenetCDF41withWindows}
333 We recently (Summer of 2010) refactored the core building of the netCDF
334 library. Unfortunately this hopelessly broke the existing port to
335 Microsoft Visual Studio. Resources permitting, the development of a new
336 Visual Studio port will be undertaken in the second half of 2010 at
337 Unidata. Until then, no Visual Studio port of the latest version of the
338 library is available.
340 Users are advised that the netCDF build is known to work with Cygwin,
341 the free POSIX layer for Windows. Building netCDF with Cygwin, and
342 including the netCDF, HDF5, zlib, and Cygwin DLLs, will allow you to
343 access the netCDF C library on Windows, even from Visual Studio builds.
345 We understand that Windows users are most comfortable with a Visual
346 Studio build, and we intend to provide one.
348 The Visual Studio port is complicated by the following factors:
350 - No configure script support on windows - the Unix build system uses
351 a configure script to determine details of the build platform and
352 allow the user to specify settings. Windows has no mechanism for
353 this other than statically set properties. A Windows-only config.h
354 file needs to be created for windows using Cygwin, then included
355 with the distribution. Since this contains the version string, it
356 must be updated "by hand" before each release.
357 - No m4 on windows - the Unix build uses the macro language m4 to
358 generate some of the C code in the netCDF library (for example,
359 libsrc/putget.c). M4 must be run under Cygwin to generate these
360 files, and then they must be statically added to the windows
distribution. With each new version of netCDF, these files should be
362 checked for changes. We are restricting new use of m4 for netCDF
363 compiles, but that doesn't help with the existing files.
364 - No user options on Windows - since Windows does not support a
365 configure step, all user options must be pre-set in the Visual
366 Studio property lists. As a simplification, many options available
367 to Unix users will be unavailable to builders on Windows, such as
368 --disable-dap, --disable-netcdf-4, and --disable-shared.
- Large files (> 2 GB) have proved to be a problem area in past ports.
- Previous Windows ports have not had to deal with the new OPeNDAP client code and the additional libraries (such as libcurl) that it depends on.
374 Unidata is a community supported organization, and we welcome
375 collaboration with users who would like to assist with the windows port.
376 Users should be sure to start with the netCDF daily snapshot, not a
377 previous release of netCDF.
379 NOTE: [Paratools](http://www.paratools.com/) has contributed
380 [instructions for how to build netCDF-4.1.3](http://www.paratools.com/Azure/NetCDF) as a Windows DLL using the MinGW cross compiler.
382 Nikolay Khabarov has contributed [documentation describing a netCDF-4.1.3 port](http://user.iiasa.ac.at/~khabarov/netcdf-win64-and-win32-mingw/) using MinGW to build native Windows 64-bit and 32-bit DLLs. Current limitations include leaving out support for Fortran and C++ interfaces, NetCDF-4, HDF5, the old version 2 API, and DAP access. The netCDF classic format and 64-bit offset format are fully supported. Links are provided to compiled 32-bit and 64-bit DLLs and static libraries.
384 A developer on the GMT Wiki has posted [detailed instructions for using CMake](http://gmtrac.soest.hawaii.edu/projects/gmt/wiki/BuildingNetCDF) and MS Visual C++ on Windows to build netCDF-4.1.3, including OPeNDAP support.
386 Another developer has contributed an unsupported native Windows build of
387 netCDF-4.1.3 with 32- and 64-bit versions, Fortran bindings, and OPeNDAP
388 support. The announcement of the availability of that port is
389 [here](http://www.unidata.ucar.edu/mailing_lists/archives/netcdfgroup/2011/msg00363.html).
391 User Veit Eitner has contributed a port of 4.1.1 to Visual Studio,
including an F90 port to Intel Fortran. Download [source](ftp://ftp.unidata.ucar.edu/pub/netcdf/contrib/win32/netcdf-4.1.1-win32-src.zip) or [binary](ftp://ftp.unidata.ucar.edu/pub/netcdf/contrib/win32/netcdf-4.1.1-win32-bin.zip) versions. This port was done before the code was refactored in 4.1.2.
394 How can I use netCDF-4 with Windows? {#How-can-I-use-netCDF-4-with-Windows}
398 Note that we have not ported the F90 or C++ APIs to the Windows
399 platform, only the C and F77 APIs. User contributions of ports to F90
400 windows compilers are very welcome (send them to
401 support-netcdf@unidata.ucar.edu).
403 On windows, NetCDF consists of a DLL and the ncgen/ncdump executables.
404 The easiest course is to download one of the pre-built DLLs and
405 utilities and just install them on your system.
407 Unlike Unix builds, the Visual Studio build **always** requires HDF5,
408 zlib, and szlib in all cases. All Windows DLL users must also have the
HDF5, zlib, and szlib DLLs. These are now available from the Unidata FTP site:
412 - [zlib DLLs for 32-bit Windows](ftp://ftp.unidata.ucar.edu/pub/netcdf/contrib/win32/zlib123-vs2005.zip)
413 - [szlib DLLs for 32-bit Windows](ftp://ftp.unidata.ucar.edu/pub/netcdf/contrib/win32/szip21-vs6-enc.zip)
414 - [HDF5 DLLs for 32-bit Windows](ftp://ftp.unidata.ucar.edu/pub/netcdf/contrib/win32/5-181-win-vs2005.zip)
Two versions of the netCDF DLLs are available, for different Fortran compilers:
419 - [NetCDF for Intel and Portland Group Fortran compilers.](ftp://ftp.unidata.ucar.edu/pub/netcdf/contrib/win32/win32_vs_PGI_dll_4.0.1.zip)
420 - [NetCDF for other Fortran compilers.](ftp://ftp.unidata.ucar.edu/pub/netcdf/contrib/win32/win32_vs_f2c_dll_4.0.1.zip)
422 To use netCDF, install the DLLs in /system/win32 and the .h files in a
423 directory known to your compiler, and define the DLL\_NETCDF
424 preprocessor macro before including netcdf.h.
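For example, here is a minimal sketch of a Windows C source file set up this way; the file name "test.nc" is illustrative, and the exact project settings depend on your Visual Studio configuration.

~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
/* Define DLL_NETCDF before including netcdf.h, as described above,
   so the netCDF declarations are treated as DLL imports. */
#define DLL_NETCDF
#include <stdio.h>
#include <netcdf.h>

int main(void) {
    int ncid, status;
    if ((status = nc_create("test.nc", NC_CLOBBER, &ncid)) != NC_NOERR) {
        fprintf(stderr, "nc_create failed: %s\n", nc_strerror(status));
        return 1;
    }
    nc_close(ncid);
    printf("created test.nc\n");
    return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~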
426 The netCDF-4 library can also be built using Visual Studio 2008. Open
427 the solution file win32/NET/netcdf.sln.
If you install the header files in the \\include directory, the netCDF
solution file will work without modifications. Otherwise the properties
of the netcdf project must be changed to include the proper header directory.
434 Both the debug and release builds work. The release build links to
435 different system libraries on Windows, and will not allow debuggers to
436 step into netCDF library code. This is the build most users will be
interested in. The debug build is probably of interest only to netCDF developers.
440 As of version 4.0.1 (March 2009), the DLL build does not yet include any
441 testing of the extended netCDF-4 data model. The netCDF4/HDF5 format is
442 extensively tested in the classic model, but tests for groups,
443 user-defined types, and other features of the expanded netCDF-4 data
444 model have not yet been ported to Windows.
446 The [NetCDF Installation and Porting Guide](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-install/index.html) documents how to
447 use netCDF with Windows.
449 Some users have built and released netCDF with Intel Fortran on Windows.
450 See the [ifort entry in other builds document](http://www.unidata.ucar.edu/software/netcdf/docs/other-builds.html#ifort-361-windows).
452 Windows is a complicated platform to build on. Some useful explanations
453 of the oddities of Windows can be found here:
455 - Cygwin documentation for [Building and Using DLLs](http://cygwin.com/cygwin-ug-net/dll.html)
456 - [OpenLDAP FAQ answer: MinGW Support in Cygwin](http://www.openldap.org/faq/data/cache/301.html), by Jon
458 - [cygwin mailing list explanation of Windows DL requirements.](http://cygwin.com/ml/cygwin/2000-06/msg00688.html)
459 - [-mno-cygwin - Building Mingw executables using Cygwin](http://www.delorie.com/howto/cygwin/mno-cygwin-howto.html)
461 Once you have the netCDF DLL, you may wish to call it from Visual Basic.
462 The [netCDF VB wrapper](ftp://ftp.unidata.ucar.edu/pub/netcdf/contrib/win32/netcdf_vb_net_wrapper.zip) will help you do this.
464 The SDS ([Scientific DataSet](http://research.microsoft.com/en-us/projects/sds/)) library and tools provide .Net developers a way to read, write and share scalars, vectors, and multidimensional grids using CSV, netCDF, and other file formats. It currently uses netCDF version 4.0.1. In addition to .Net libraries, SDS provides a set of utilities and packages: an sds command line utility, a DataSet Viewer application and an add-in for Microsoft Excel 2007 (and later versions).
468 How do I build and install netCDF for a specific development environment? {#How-do-I-build-and-install-netCDF-for-a-specific-development-environment}
471 You have to build and install the netCDF C library first, before you build and install other language libraries that depend on it, such as Fortran, C++, or Python netCDF libraries. The netCDF Java library is mostly independent of the netCDF C library, unless you need to write netCDF-4 files from Java, in which case you will also need an installed netCDF C library.
473 For more details, see
474 [Getting and Building netCDF](http://www.unidata.ucar.edu/software/netcdf/docs/getting_and_building_netcdf.html).
479 How can I tell if I successfully built and installed netCDF? {#How-can-I-tell-if-I-successfully-built-and-installed-netCDF}
483 We make build output from various platforms [available](../builds) for
484 comparison with your output. In general, you can ignore compiler
485 warnings if the "make test" step is successful. Lines that begin with
486 "\*\*\*" in the "make test" output indicate results from tests. The C
487 and Fortran-77 interfaces are tested extensively, but only rudimentary
488 tests are currently used for the C++ and Fortran-90 interfaces.
490 How can I tell what version I'm using? {#How-can-I-tell-what-version-Im-using}
Running the **ncdump** utility with no arguments prints a usage message;
the last line of the resulting output will identify the version
associated with the **ncdump** utility. You can also call one of the
functions `nc_inq_libvers()`, `nf_inq_libvers()`, or
`nf90_inq_libvers()` from C, Fortran-77, or Fortran-90 programs to get a
version string.
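For example, a minimal C sketch that prints the version string of the library it is linked against:

~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
/* Print the netCDF library version string via nc_inq_libvers(). */
#include <stdio.h>
#include <netcdf.h>

int main(void) {
    printf("Linked against netCDF library version: %s\n", nc_inq_libvers());
    return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~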
508 Where does netCDF get installed? {#Where-does-netCDF-get-installed}
512 The netCDF installation directory can be set at the time configure is
513 run using the --prefix argument. If it is not specified, /usr/local is
514 used as the default prefix.
516 For more information see the [NetCDF Installation and Porting Guide](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-install).
518 Formats, Data Models, and Software Releases {#formatsdatamodelssoftwarereleases}
519 ===========================================
521 In different contexts, "netCDF" may refer to a data model, a software
522 implementation with associated application program interfaces (APIs), or
523 a data format. Confusion may arise in discussions of different versions
524 of the data models, software, and formats. For example, compatibility
525 commitments require that new versions of the software support all
526 previous versions of the format and data model. This section of FAQs is
527 intended to clarify netCDF versions and help users determine what
528 version to build and install.
530 How many netCDF formats are there, and what are the differences among them? {#How-many-netCDF-formats-are-there-and-what-are-the-differences-among-them}
There are four netCDF format variants:

- the classic format
- the 64-bit offset format
538 - the netCDF-4 format
539 - the netCDF-4 classic model format
541 (In addition, there are two textual representations for netCDF data,
542 though these are not usually thought of as formats: CDL and NcML.)
544 The **classic format** was the only format for netCDF data created
545 between 1989 and 2004 by the reference software from Unidata. It is
546 still the default format for new netCDF data files, and the form in
547 which most netCDF data is stored.
549 In 2004, the **64-bit offset format** variant was added. Nearly
550 identical to netCDF classic format, it allows users to create and access
551 far larger datasets than were possible with the original format. (A
64-bit platform is not required to write or read 64-bit offset netCDF files.)
555 In 2008, the **netCDF-4 format** was added to support per-variable
556 compression, multiple unlimited dimensions, more complex data types, and
557 better performance, by layering an enhanced netCDF access interface on
558 top of the HDF5 format.
560 At the same time, a fourth format variant, **netCDF-4 classic model
561 format**, was added for users who needed the performance benefits of the
562 new format (such as compression) without the complexity of a new
563 programming interface or enhanced data model.
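In the C interface, the format variant is chosen by the creation-mode flags passed to nc_create() when a file is created. Here is a minimal sketch; the file names are illustrative, and the last two calls require a library built with netCDF-4 support.

~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
/* Selecting a format variant at file creation time (illustrative sketch). */
#include <netcdf.h>

int main(void) {
    int ncid;
    /* classic format (the default) */
    nc_create("classic.nc", NC_CLOBBER, &ncid);
    nc_close(ncid);
    /* 64-bit offset format */
    nc_create("offset64.nc", NC_CLOBBER | NC_64BIT_OFFSET, &ncid);
    nc_close(ncid);
    /* netCDF-4 format */
    nc_create("nc4.nc", NC_CLOBBER | NC_NETCDF4, &ncid);
    nc_close(ncid);
    /* netCDF-4 classic model format */
    nc_create("nc4classic.nc", NC_CLOBBER | NC_NETCDF4 | NC_CLASSIC_MODEL, &ncid);
    nc_close(ncid);
    return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~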
565 With each additional format variant, the C-based reference software from
566 Unidata has continued to support access to data stored in previous
567 formats transparently, and to also support programs written using
568 previous programming interfaces.
570 Although strictly speaking, there is no single "netCDF-3 format", that
571 phrase is sometimes used instead of the more cumbersome but correct
572 "netCDF classic or 64-bit offset format" to describe files created by
573 the netCDF-3 (or netCDF-1 or netCDF-2) libraries. Similarly "netCDF-4
574 format" is sometimes used informally to mean "either the general
575 netCDF-4 format or the restricted netCDF-4 classic model format". We
will use these shorter phrases in FAQs below when no confusion is likely.
579 A more extensive description of the netCDF formats and a formal
580 specification of the classic and 64-bit formats is available as a [NASA ESDS community standard](https://earthdata.nasa.gov/sites/default/files/esdswg/spg/rfc/esds-rfc-011/ESDS-RFC-011v2.00.pdf).
582 How can I tell which format a netCDF file uses? {#How-can-I-tell-which-format-a-netCDF-file-uses}
586 The short answer is that under most circumstances, you should not care,
587 if you use version 4.0 or later of the netCDF library to access data in
588 the file. But the difference is indicated in the first four bytes of the
589 file, which are 'C', 'D', 'F', '\\001' for the classic netCDF format;
590 'C', 'D', 'F', '\\002' for the 64-bit offset format; or '\\211', 'H',
591 'D', 'F' for an HDF5 file, which could be either a netCDF-4 file or a
592 netCDF-4 classic model file. (HDF5 files may also begin with a
593 user-block of 512, 1024, 2048, ... bytes before what is actually an
594 8-byte signature beginning with the 4 bytes above.)
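As an illustration (this is a stand-alone sketch, not part of the netCDF API), a small C program can read those four bytes and report which signature they match; note that it will not recognize an HDF5 file whose signature is preceded by a user-block.

~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
/* Report which netCDF-related signature the first four bytes of a file match. */
#include <stdio.h>

int main(int argc, char **argv) {
    unsigned char magic[4];
    FILE *fp;
    if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL) return 1;
    if (fread(magic, 1, 4, fp) != 4) { fclose(fp); return 1; }
    fclose(fp);
    if (magic[0] == 'C' && magic[1] == 'D' && magic[2] == 'F' && magic[3] == 1)
        printf("classic netCDF format\n");
    else if (magic[0] == 'C' && magic[1] == 'D' && magic[2] == 'F' && magic[3] == 2)
        printf("64-bit offset netCDF format\n");
    else if (magic[0] == 0x89 && magic[1] == 'H' && magic[2] == 'D' && magic[3] == 'F')
        printf("HDF5-based file (netCDF-4, netCDF-4 classic model, or other HDF5)\n");
    else
        printf("no recognized netCDF signature at offset 0\n");
    return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~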
With netCDF version 4.0 or later, there is an easy way to distinguish
between netCDF-4 and netCDF-4 classic model files: running **ncdump**
with the "-k" option, for example "ncdump -k foo.nc", reports the kind
of file as "classic", "64-bit offset", "netCDF-4", or "netCDF-4 classic model".
606 In a program, you can call the function
607 [nc_inq_format](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-c.html#nc_005finq-Family)(or [nf90_inq_format](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-f90.html#Compiling-and-Linking-with-the-NetCDF-Library) for the Fortran-90 interface) to determine the format variant of an open netCDF file.
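A minimal C sketch of that call (the file name "foo.nc" is illustrative):

~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
/* Report the format variant of an existing file with nc_inq_format(). */
#include <stdio.h>
#include <netcdf.h>

int main(void) {
    int ncid, format, status;
    if ((status = nc_open("foo.nc", NC_NOWRITE, &ncid)) != NC_NOERR) {
        fprintf(stderr, "%s\n", nc_strerror(status));
        return 1;
    }
    nc_inq_format(ncid, &format);
    switch (format) {
    case NC_FORMAT_CLASSIC:         printf("classic\n"); break;
    case NC_FORMAT_64BIT:           printf("64-bit offset\n"); break;
    case NC_FORMAT_NETCDF4:         printf("netCDF-4\n"); break;
    case NC_FORMAT_NETCDF4_CLASSIC: printf("netCDF-4 classic model\n"); break;
    default:                        printf("unknown\n");
    }
    nc_close(ncid);
    return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~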
Finally, on a Unix system, one way to display the first four bytes of a
file, say foo.nc, is to run "od -An -c -N4 foo.nc", which prints
"C D F 001", "C D F 002", or "211 H D F",
depending on whether foo.nc is a classic, 64-bit offset, or netCDF-4
631 file, respectively. This method cannot be used to distinguish between
632 netCDF-4 and netCDF-4 classic model variants, or between a netCDF-4 file
633 and a different kind of HDF5 file.
637 How many netCDF data models are there? {#How-many-netCDF-data-models-are-there}
640 There are only two netCDF data models, the [classic model](/netcdf/workshops/2008/datamodel/NcClassicModel.html) and the [enhanced model](/netcdf/workshops/2008/netcdf4/Nc4DataModel.html) (also called the netCDF-4 data model). The classic model is the simpler of the two, and is used for all data stored in classic format, 64-bit offset format, or netCDF-4 classic model format. The enhanced model (sometimes also referred to as the netCDF-4 data model) is an extension of the classic model that adds more powerful forms of data representation and
641 data types at the expense of some additional complexity. Although data represented with the classic model can also be represented using the enhanced model, datasets that use enhanced model features, such as user-defined data types, cannot be represented with the classic model. Use of the enhanced model requires storage in the netCDF-4 format.
643 How many releases of the C-based netCDF software are supported? {#How-many-releases-of-the-C-based-netCDF-software-are-supported}
647 When netCDF version 4.0 was released in June 2008, version 3.6.3 was
648 released simultaneously, and both releases were supported by Unidata.
649 Version 3.6.3 supported only the classic and 64-bit offset formats.
650 Version 4.0 supported both of those format variants by default, and also
651 the netCDF-4 and netCDF-4 classic model formats, if built using a
652 previously installed HDF5 library and using the "--enable-netcdf-4"
653 configure option. Software built from the netCDF-4.0 release without
654 specifying "--enable-netcdf-4" (the default) was identical to software
655 built with netCDF-3.6.3.
657 Both netCDF-3 and netCDF-4 C libraries are part of a single software
658 release. The netCDF software may be built to support just the classic
659 and 64-bit offset formats (the default) or to also support the netCDF-4
660 and netCDF-4 classic model formats, if the HDF5-1.8.x library is
661 installed. Unidata no longer supports a separate netCDF-3-only version
662 of the software, but instead supports both the classic and enhanced data
663 models and all four format variants in a single source distribution.
665 This does not indicate any plan to drop support for netCDF-3 or the
666 formats associated with netCDF-3. Support for earlier formats and APIs
667 will continue with all future versions of netCDF software from Unidata.
669 Should I get netCDF-3 or netCDF-4? {#Should-I-get-netCDF-3-or-netCDF-4}
By downloading a current version of netCDF-4, you have the choice to build either:
676 - the default netCDF-3 libraries, which support classic and 64-bit
677 offset formats, and the classic data model; or
678 - the netCDF-4 libraries, which support netCDF-4 and netCDF-4 classic
model formats, as well as classic and 64-bit offset formats, and the enhanced data model.
682 Which version to build depends on how you will use the software.
684 Installing the simpler netCDF-3 version of the software is recommended
685 if the following situations apply:
687 - all the data you need to access is available in netCDF classic or
688 64-bit offset formats
689 - you are installing netCDF in order to support another software
690 package that uses only netCDF-3 features
691 - you plan to only write data in a form that netCDF-3 software and
692 applications can access
693 - you want to delay upgrading to support netCDF-4 until netCDF-4
694 formats are more widely used
695 - you cannot install the prerequisite HDF5 1.8 software required to
696 build and install netCDF-4
698 Installing the netCDF-4 version of the software is required for any of
699 the following situations:
701 - you need to access netCDF data that makes use of netCDF-4
702 compression or chunking
703 - you need to access data in all netCDF formats including netCDF-4 or
704 netCDF-4 classic model formats
705 - you need to write non-record variables larger than 4GiB or record variables with more than 4GiB per record (see ["Have all netCDF size limits been eliminated?"](http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#Large%20File%20Support10))
706 - you are installing netCDF to support other software packages that
707 require netCDF-4 features
708 - you want to write data that takes advantage of compression,
709 chunking, or other netCDF-4 features
710 - you want to be able to read netCDF-4 classic model data with no
changes to your current software except relinking with the new library
713 - you want to benchmark your current applications with the new
714 libraries to determine whether the benefits are significant enough
715 to justify the upgrade
716 - you need to use parallel I/O with netCDF-4 or netCDF-4 classic files
718 What is the "enhanced data model" of netCDF-4, and how does it differ from the netCDF-3 classic data model? {#whatisenhanceddatamodel}
722 The enhanced model (sometimes referred to as the netCDF-4 data model) is
723 an extension to the [classic model](/netcdf/workshops/2008/datamodel/NcClassicModel.html) that adds more powerful forms of data representation and data types at the expense of some additional complexity. Specifically, it adds six new primitive data types, four kinds of user-defined data types, multiple unlimited
724 dimensions, and groups to organize data hierarchically and provide
725 scopes for names. A [picture](/netcdf/workshops/2008/netcdf4/Nc4DataModel.html) of the enhanced data model, with the extensions to the classic model
726 highlighted in red, is available from the online netCDF workshop.
728 Although data represented with the classic model can also be represented
729 using the enhanced model, datasets that use features of the enhanced
730 model, such as user-defined data types, cannot be represented with the
731 classic model. Use of added features of the enhanced model requires that
732 data be stored in the netCDF-4 format.
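For illustration, here is a minimal C sketch that uses two enhanced-model features, a group and the NC_STRING type, and therefore must be written as a netCDF-4 format file; the file, group, dimension, and variable names are illustrative.

~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
/* Two enhanced-model features: a group and an NC_STRING variable.
   Error checking is omitted for brevity. */
#include <netcdf.h>

int main(void) {
    int ncid, grpid, dimid, varid;
    nc_create("enhanced.nc", NC_NETCDF4, &ncid);              /* enhanced data model */
    nc_def_grp(ncid, "observations", &grpid);                 /* groups: enhanced model only */
    nc_def_dim(grpid, "station", 5, &dimid);
    nc_def_var(grpid, "name", NC_STRING, 1, &dimid, &varid);  /* NC_STRING: enhanced model only */
    nc_close(ncid);
    return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~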
734 Why doesn't the new netCDF-4 installation I built seem to support any of the new features? {#Whydoesnt-the-new-netCDF-4-installation-I-built-seem-to-support-any-of-the-new-features}
738 If you built the software from source without access to an HDF5 library,
739 then only the netCDF-3 library was built and installed. The current
740 release will build full netCDF-4 support if the HDF5 1.8.x library is
already installed where it can be found by the configure script or explicitly specified to it at build time.
744 Will Unidata continue to support netCDF-3? {#Will-Unidata-continue-to-support-netCDF-3}
748 Yes, Unidata has a commitment to preserving backward compatibility.
Because preserving access to archived data for future generations is crucially important:
753 - New netCDF software will provide read and write access to *all*
754 earlier forms of netCDF data.
755 - C and Fortran programs using documented netCDF APIs from previous
756 releases will be supported by new netCDF software (after recompiling
757 and relinking, if needed).
758 - Future releases of netCDF software will continue to support data
759 access and API compatibility.
761 To read compressed data, what changes do I need to make to my netCDF-3 program? {#To-read-compressed-data-what-changes-do-I-need-to-make-to-my-netCDF-3-program}
765 None. No changes to the program source are needed, because the library
766 handles decompressing data as it is accessed. All you need to do is
767 relink your netCDF-3 program to the netCDF-4 library to recognize and
768 handle compressed data.
770 To write compressed data, what changes do I need to make to my netCDF-3 program? {#To-write-compressed-data-what-changes-do-I-need-to-make-to-my-netCDF-3-program}
774 The **nccopy** utility in versions 4.1.2 and later supports a "-d *level*"
775 deflate option that copies a netCDF file, compressing all variables
776 using the specified level of deflation and default chunking parameters,
777 or you can specify chunking with the "-c" option.
779 To do this within a program, or if you want different variables to have
780 different levels of deflation, define compression properties when each
781 variable is defined. The function to call is
782 [nc_def_var_deflate](/netcdf-c.html#nc_005fdef_005fvar_005fdeflate)
783 for C programs, [nf90_def_var_deflate](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-f90.html#NF90_005fDEF_005fVAR_005fDEFLATE) for Fortran 90 programs, [NF_DEF_VAR_DEFLATE](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-f77.html#NF_005fDEF_005fVAR_005fDEFLATE) for Fortran 77. For C++ programs, the experimental cxx4 API may be used,
784 assuming you have configured with --enable-cxx-4.
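For example, here is a minimal C sketch that defines a compressed two-dimensional variable; the file name, variable and dimension names, and deflate level are illustrative.

~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
/* Define a deflate-compressed variable with the netCDF-4 C API. */
#include <stdio.h>
#include <netcdf.h>

int main(void) {
    int ncid, dimids[2], varid, status;
    if ((status = nc_create("compressed.nc", NC_NETCDF4 | NC_CLASSIC_MODEL, &ncid)) != NC_NOERR) {
        fprintf(stderr, "%s\n", nc_strerror(status));
        return 1;
    }
    nc_def_dim(ncid, "y", 180, &dimids[0]);
    nc_def_dim(ncid, "x", 360, &dimids[1]);
    nc_def_var(ncid, "temperature", NC_FLOAT, 2, dimids, &varid);
    nc_def_var_deflate(ncid, varid, 1, 1, 4);   /* shuffle=1, deflate=1, level=4 (0-9) */
    nc_close(ncid);
    return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~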
786 Although default variable chunking parameters may be adequate,
787 compression can sometimes be improved by choosing good chunking
788 parameters when a variable is first defined. For example, if a 3D field
789 tends to vary a lot with vertical level, but not so much within a
790 horizontal slice corresponding to a single level, then defining chunks
791 to be all or part of a horizontal slice would typically produce better
792 compression than chunks that included multiple horizontal slices. There
793 are other factors in choosing chunk sizes, especially matching how the
794 data will be accessed most frequently. Chunking properties may only be
795 specified when a variable is first defined. The function to call is
[nc_def_var_chunking](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-c.html#nc_005fdef_005fvar_005fchunking) for C programs,
798 [nf90_def_var_chunking](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-f90.html#NF90_005fDEF_005fVAR_005fCHUNKING)
799 for Fortran 90 programs, and
800 [NF_DEF_VAR_CHUNKING](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-f77.html#NF_005fDEF_005fVAR_005fCHUNKING)
801 for Fortran 77 programs. For C++ programs, the experimental cxx4 API may
802 be used, assuming you have configured with --enable-cxx-4.
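For example, here is a minimal C sketch that defines chunk shapes holding one horizontal slice per chunk, as described above; the names and sizes are illustrative, and error checking is omitted for brevity.

~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
/* Choose chunk shapes when a 3-D variable is defined:
   each chunk is a single horizontal (y,x) slice. */
#include <netcdf.h>

int main(void) {
    int ncid, dimids[3], varid;
    size_t chunks[3] = {1, 180, 360};   /* one vertical level per chunk */
    nc_create("chunked.nc", NC_NETCDF4, &ncid);
    nc_def_dim(ncid, "level", 30, &dimids[0]);
    nc_def_dim(ncid, "y", 180, &dimids[1]);
    nc_def_dim(ncid, "x", 360, &dimids[2]);
    nc_def_var(ncid, "theta", NC_FLOAT, 3, dimids, &varid);
    nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks);
    nc_def_var_deflate(ncid, varid, 1, 1, 4);
    nc_close(ncid);
    return 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~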
804 If I create netCDF-4 classic model files, can they be read by IDL, MATLAB, R, Python and ArcGIS? {#If-I-create-netCDF-4-classic-model-files-can-they-be-read-by-IDL-MATLAB-R-Python-and-ArcGIS}
IDL 8.0 ships with support for netCDF-4, including support for OPeNDAP remote access.
811 MATLAB 2012a includes netCDF 4 support with OPeNDAP support turned on,
812 enabling remote access to many kinds of data, as well as use of groups,
813 compression, and chunking. An example is available demonstrating some of
814 the new functions. [NCTOOLBOX](http://nctoolbox.github.io/nctoolbox/),
815 uses netCDF-Java to provide read access to datasets in netCDF-4, GRIB,
816 GRIB2 and other formats through Unidata's Common Data Model.
818 R has the [ncdf4 package](http://cirrus.ucsd.edu/~pierce/ncdf/).
820 Python has the [netcdf4-python package](http://code.google.com/p/netcdf4-python/).
822 ArcGIS 10.0 can read netcdf4 using the Multidimensional Tools in
823 ArcToolbox, and in ArcGIS 10.1, the [Multidimensional Supplemental toolbox](http://esriurl.com/MultidimensionSupplementalTools) uses NetCDF4-Python to read OPeNDAP and netCDF4 files, taking advantage of CF conventions if they exist.
825 What applications are able to deal with *arbitrary* netCDF-4 files? {#What-applications-are-able-to-deal-with-arbitrary-netCDF-4-files}
828 The netCDF utilities **ncdump**, **ncgen**, and **nccopy**, available in
829 the Unidata C-based netCDF-4 distribution, are able to deal with
830 arbitrary netCDF-4 files (as well as all other kinds of netCDF files).
832 How can I convert netCDF-3 files into netCDF-4 files? {#How-can-I-convert-netCDF-3-files-into-netCDF-4-files}
836 Every netCDF-3 file can be read or written by a netCDF version 4
837 library, so in that respect netCDF-3 files are already netCDF-4 files
838 and need no conversion. But if you want to convert a classic or 64-bit
839 offset format file into a netCDF-4 format or netCDF-4 classic model
840 format file, the easiest way is to use the **nccopy** utility. For example
to convert a classic format file foo3.nc to a netCDF-4 format file foo4.nc, use:
844 ~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
845 nccopy -k netCDF-4 foo3.nc foo4.nc
846 ~~~~~~~~~~~~~~~~~~~~~~~~~
848 To convert a classic format file foo3.nc to a netCDF-4 classic
849 model format file foo4c.nc, you could use:
851 ~~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
852 nccopy -k netCDF-4-classic foo3.nc foo4c.nc
853 ~~~~~~~~~~~~~~~~~~~~~~~~~~
855 If you have installed [NCO](http://www.unidata.ucar.edu/software/netcdf/docs/software.html#NCO), the NCO
856 utility "ncks" can be used to accomplish the same task, as follows:
858 ~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
859 ncks -7 foo3.nc foo4c.nc
860 ~~~~~~~~~~~~~~~~~~~~~~~~
862 Another method is available for relatively small files, using the **ncdump**
863 and **ncgen** utilities (built with a netCDF-4 library). Assuming
864 "small3.nc" is a small classic format or 64-bit offset format netCDF
file, you can create an equivalent netCDF-4 classic model file named
866 "small4.nc" as follows:
~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
ncdump small3.nc > small.cdl
ncgen -o small4.nc -k netCDF-4-classic small.cdl
~~~~~~~~~~~~~~~~~~~~~~~~~
873 Why might someone want to convert netCDF-4 files into netCDF-3 files? {#Why-might-someone-want-to-convert-netCDF-4-files-into-netCDF-3-files}
877 NetCDF-4 classic model files that use compression can be smaller than
878 the equivalent netCDF-3 files, so downloads are quicker. If they are
879 then unpacked and converted to the equivalent netCDF-3 files, they can
880 be accessed by applications that haven't yet upgraded to netCDF-4.
882 How can I convert netCDF-4 files into netCDF-3 files? {#How-can-I-convert-netCDF-4-files-into-netCDF-3-files}
886 In general, you can't, because netCDF-4 files may have features of the
887 netCDF enhanced data model, such as groups, compound types,
888 variable-length types, or multiple unlimited dimensions, for which no
889 netCDF-3 representation is available. However, if you know that a
890 netCDF-4 file conforms to the classic model, either because it was
891 written as a netCDF-4 classic model file, because the program that wrote
892 it was a netCDF-3 program that was merely relinked to a netCDF-4
893 library, or because no features of the enhanced model were used in
writing the file, then there are several ways to convert it to a netCDF-3 file.
897 You can use the **nccopy** utility. For
898 example to convert a netCDF-4 classic-model format file foo4c.nc to a
899 classic format file foo3.nc, use:
901 ~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
902 nccopy -k classic foo4c.nc foo3.nc
903 ~~~~~~~~~~~~~~~~~~~~~~~~~
905 If you have installed [NCO](http://www.unidata.ucar.edu/software/netcdf/docs/software.html#NCO), the NCO utility "ncks" can be used to accomplish the same task, as follows:
907 ~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
908 ncks -3 foo4c.nc foo3.nc
909 ~~~~~~~~~~~~~~~~~~~~~~~~~
911 For a relatively small netCDF-4 classic model file, "small4c.nc" for
912 example, you can also use the **ncdump** and **ncgen** utilities to create an
913 equivalent netCDF-3 classic format file named "small3.nc" as follows:
~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
ncdump small4c.nc > small4.cdl
ncgen -o small3.nc small4.cdl
~~~~~~~~~~~~~~~~~~~~~~~~~
920 How can I convert HDF5 files into netCDF-4 files? {#How-can-I-convert-HDF5-files-into-netCDF-4-files}
924 NetCDF-4 intentionally supports a simpler data model than HDF5, which
925 means there are HDF5 files that cannot be converted to netCDF-4,
926 including files that make use of features in the following list:
928 - Multidimensional data that doesn't use shared dimensions implemented
929 using HDF5 "dimension scales". (This restriction was eliminated in
netCDF 4.1.1, permitting access to HDF5 datasets that don't use dimension scales.)
932 - Non-hierarchical organizations of Groups, in which a Group may have
933 multiple parents or may be both an ancestor and a descendant of
934 another Group, creating cycles in the subgroup graph. In the
935 netCDF-4 data model, Groups form a tree with no cycles, so each
936 Group (except the top-level unnamed Group) has a unique parent.
937 - HDF5 "references" which are like pointers to objects and data
regions within a file. The netCDF-4 data model does not support references.
940 - Additional primitive types not included in the netCDF-4 data model,
941 including H5T\_TIME, H5T\_BITFIELD, and user-defined atomic types.
942 - Multiple names for data objects such as variables and groups. The
943 netCDF-4 data model requires that each variable and group have a
944 single distinguished name.
945 - Attributes attached to user-defined types.
946 - Stored property lists
947 - Object names that begin or end with a space
949 If you know that an HDF5 file conforms to the netCDF-4 enhanced data
950 model, either because it was written with netCDF function calls or
951 because it doesn't make use of HDF5 features in the list above, then it
952 can be accessed using netCDF-4, and analyzed, visualized, and
953 manipulated through other applications that can access netCDF-4 files.
955 The [ncks tool](http://nco.sourceforge.net/nco.html#ncks-netCDF-Kitchen-Sink) of the NCO collection of netCDF utilities can take simple HDF5 data as input and produce a netCDF file as output, so this may work:
~~~~~~~~~~~~~~~~~~~~~~~~~ {.boldcode}
ncks infile.hdf5 outfile.nc
~~~~~~~~~~~~~~~~~~~~~~~~~
961 Another tool has been developed to convert HDF5-EOS Aura files to
962 netCDF-4 files, and it is currently undergoing testing and documentation
963 before release on the HDF5 web site.
965 How can I convert netCDF-4 files into HDF5 files? {#How-can-I-convert-netCDF-4-files-into-HDF5-files}
969 Every netCDF-4 or netCDF-4 classic model file can be read or written by
970 the HDF5 library, version 1.8 or later, so in that respect netCDF-4
971 files are already HDF5 files and need no conversion.
973 The way netCDF-4 data objects are represented using HDF5 is described in
974 detail in the User Manual section ["C.3 The NetCDF-4 Format"](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#NetCDF_002d4-Format).
976 Why aren't different extensions used for the different formats, for example ".nc3" and ".nc4"? {#why-arent-different-extensions-used}
979 The file extension used for netCDF files is purely a convention. The
980 netCDF libraries don't use the file extension. A user can currently
981 create a netCDF file with any extension, even one not consistent with
982 the format of the file.
984 The **ncgen** utility uses ".nc" as a default extension for output, but this
985 can be overridden using the "-o" option to specify the name for the
986 output file. Recent versions of **ncgen** also have a "-k" option to specify
987 what kind of output file is desired, selecting any of the 4 format
988 variants, using either a numeric code or a text string. Most other
989 netCDF client software pays no attention to the file extension, so using
990 more explicit extensions by convention has no significant drawbacks,
except possibly causing confusion about format differences that may not be important.
994 Why is the default of netCDF-4 to continue to create classic files, rather than netCDF-4 files? {#Why-is-the-default-of-netCDF-4-to-continue-to-create-classic-files-rather-than-netCDF-4-files}
998 Until widely used netCDF client software has been adapted or upgraded to
999 read netCDF-4 data, classic file format is the default for
1000 interoperability with most existing netCDF software.
1002 Can netCDF-4 read arbitrary HDF5 files? {#Can-netCDF-4-read-arbitrary-HDF5-files}
1006 No, but it can read many HDF5 files, and more recent versions can access
1007 more HDF5 data. If you want to access HDF5 data through netCDF
1008 interfaces, avoid HDF5 features not included in the netCDF enhanced data
1009 model. For more details see "[How can I convert HDF5 files into netCDF-4 files?](#fv15)", above.
1011 I installed netCDF-3 with --enable-shared, but it looks like the libraries it installed were netCDF-4, with names like libnetcdf.4.dylib. What's going on? {#I-installed-netCDF-3-with---enable-shared-but-it-looks-like-the-libraries-it-installed-were-netCDF-4-with-names-like-libnetcdf4dylib-Whats-going-on}
1015 The number used for the shared library name is not related to the netCDF
1016 library version number.
1018 NetCDF-3.6.3 permits UTF-8 encoded Unicode names. Won't this break backward compatibility with previous software releases that didn't allow such names? {#NetCDF-363-permits-UTF-8-encoded-Unicode-names-Wont-this-break-backward-compatibility-with-previous-software-releases-that-didnt-allow-such-names}
1022 Earlier versions of the netCDF libraries have always been able to read
1023 data with arbitrary characters in names. The restriction has been on
1024 *creating* files with names that contained "invalid" special characters.
1025 The check for characters used in names occurred when a program tried to
1026 define a new variable, dimension, or attribute, and an error would be
1027 returned if the characters in the names didn't follow the rules.
1028 However, there has never been any such check on reading data, so
1029 arbitrary characters have been permitted in names created through a
1030 different implementation of the netCDF APIs, or through early versions
1031 of netCDF software (before 2.4), which allowed arbitrary names.
1033 In other words, the expansion to handle UTF-8 encoded Unicode characters
1034 and special characters such as \`:' and \` ' still conforms with
1035 Unidata's commitment to backwards compatibility. All old files are still
1036 readable and writable by the new software, and programs that used to
1037 work will still work when recompiled and relinked with the new
1038 libraries. Files using new characters in names will still be readable
1039 and writable by programs that used older versions of the libraries.
1040 However, programs linked to older library versions will not be able to
1041 create new data objects with the new less-restrictive names.
1043 How difficult is it to convert my application to handle arbitrary netCDF-4 files? {#How-difficult-is-it-to-convert-my-application-to-handle-arbitrary-netCDF-4-files}
1047 Modifying an application to fully support the new enhanced data model
1048 may be relatively easy or arbitrarily difficult :-), depending on what
1049 your application does and how it is written. Use of recursion is the
1050 easiest way to handle nested groups and nested user-defined types. An
object-oriented architecture is also helpful in dealing with the added complexity of the enhanced data model.
1054 We recommend proceeding incrementally, supporting features that are
1055 easier to implement first. For example, handling the six new primitive
1056 types is relatively straightforward. After that, using recursion (or the
1057 group iterator interface used in **nccopy**) to support Groups is not too
1058 difficult. Providing support for user-defined types is more of a
1059 challenge, especially since they can be nested.
1061 The utility program **nccopy**, provided in releases 4.1 and later, shows
1062 how this can be done using the C interface. It copies an input netCDF
1063 file in any of the format variants, handling nested groups, strings, and
1064 any user-defined types, including arbitrarily nested compound types,
1065 variable-length types, and data of any valid netCDF-4 type. It also
1066 demonstrates how to handle variables that are too large to fit in memory
1067 by using an iterator interface. Other generic utility programs can make
1068 use of parts of **nccopy** for more complex operations on netCDF data.
Shared Libraries {#Shared-Libraries}
====================================
1075 What are shared libraries? {#What-are-shared-libraries}
1079 Shared libraries are libraries that can be shared by multiple running
1080 applications at the same time. This **may** improve performance.
1082 For example, if I have a library that provides function foo(), and I
1083 have two applications that call foo(), then with a shared library, only
1084 one copy of the foo() function will be loaded into memory, and both
1085 programs will use it. With static libraries, each application would have
1086 its own copy of the foo() function.
More information on shared libraries can be found at the following locations:
1091 - [The Program-Library HowTo](http://www.tldp.org/HOWTO/Program-Library-HOWTO/index.html),
1094 - [Wikipedia Library Entry](http://en.wikipedia.org/wiki/Library_(computer_science))
1098 Can I build netCDF with shared libraries? {#Can-I-build-netCDF-with-shared-libraries}
1102 Starting with version 3.6.2, netCDF can build shared libraries on
1103 platforms that support them, but by default netCDF will build static
1104 libraries only. To turn on shared libraries, use the --enable-shared
1105 option to the [netCDF configure script](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-install/Running-the-configure-Script.html).
1109 How do I use netCDF shared libraries? {#How-do-I-use-netCDF-shared-libraries}
1113 With netCDF version 3.6.2, shared libraries can be built on platforms
1114 that support them by using the --enable-shared argument to [netCDF configure script](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-install/Running-the-configure-Script.html).
1116 Users of earlier versions of netCDF can build shared libraries by
1117 setting flags correctly during builds.
1119 When you use a static library, the code is copied from the library into
your program when the program is built. The library is only needed at build time.
1123 With a shared library the code in the library is not copied into your
1124 executable, so the library is needed every time the program is run.
1126 If you write a program that uses the netCDF shared library, the
1127 operating system will have to find it every time your program is run. It
1128 will look in these places:
1130 1. Directories you specified as shared library locations at **build
time**. Unfortunately this is done differently with different compilers and linkers.
2. Directories specified in the environment variable LD\_RUN\_PATH at **build time**.
1137 3. Directories specified in the OS-specific environment variable for
1138 this purpose at **run time**. (LD\_LIBRARY\_PATH on Linux and many
1139 other Unix variants, LOADLIBS on AIX systems, etc.)
1141 4. A default list of directories that includes /usr/lib (but don't
1142 install software there!), and may or may not contain places you
1143 might install netCDF, like /usr/local/lib.
5. The directories specified in an OS configuration file such as /etc/ld.so.conf.
1147 By default the netCDF library will be installed in /usr/local/lib. (This
1148 can be overridden with the --prefix option to the [netCDF configure script](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf-install/Running-the-configure-Script.html)).
1150 An external site by Arnaud Desitter has a [table of different tools and command line options relating to shared libraries](http://www.fortran-2000.com/ArnaudRecipes/sharedlib.html) on Linux, Solaris, HP-UX, Tru64, AIX, SGI, Win32, MacOS X, VMS (wow!), and OS/390.
For more information about how to do this on Linux, users may find it
useful to read this external webpage, some documentation from Caldera, a
Linux distributor: [Specifying directories to be searched by the dynamic linker](http://osr507doc.sco.com/en/tools/ccs_linkedit_dynamic_dirsearch.html).
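For example (an illustration only, assuming the library was installed under the default /usr/local/lib), on Linux you could make the shared library findable at run time by setting `LD_LIBRARY_PATH`, e.g. `export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH`, before running your program.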
1158 Large File Support {#Large-File-Support}
1161 Was it possible to create netCDF files larger than 2 GiBytes before version 3.6? {#Was-it-possible-to-create-netCDF-files-larger-than-2-GiBytes-before-version-36}
1165 Yes, but there are significant restrictions on the structure of large
1166 netCDF files that result from the 32-bit relative offsets that are part
1167 of the classic netCDF format. For details, see [NetCDF Classic Format Limitations](netcdf/NetCDF-Classic-Format-Limitations.html#NetCDF-Classic-Format-Limitations)
1168 in the User's Guide.
1172 What is Large File Support? {#What-is-Large-File-Support}
1176 Large File Support (LFS) refers to operating system and C library
1177 facilities to support files larger than 2 GiB. On a few 32-bit platforms
1178 the default size of a file offset is still a 4-byte signed integer,
1179 which limits the maximum size of a file to 2 GiB. Using LFS interfaces
1180 and the 64-bit file offset type, the maximum size of a file may be as
1181 large as 2^63^ bytes, or 8 EiB. For some current platforms, large file
1182 macros or appropriate compiler flags have to be set to build a library
1183 with support for large files. This is handled automatically in netCDF
1184 3.6 and later versions.
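On Unix-like systems the transitional LFS interfaces are commonly enabled with compile-time macros such as `_FILE_OFFSET_BITS=64` and `_LARGEFILE_SOURCE` (mentioned here only as typical examples; the exact flags are platform-dependent).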
1186 More information about Large File Support is available from [Adding Large File Support to the Single UNIX Specification](http://www.unix.org/version2/whatsnew/lfs.html).
1190 What does Large File Support have to do with netCDF? {#What-does-Large-File-Support-have-to-do-with-netCDF}
1194 When the netCDF format was created in 1988, 4-byte fields were reserved
1195 for file offsets, specifying where the data for each variable started
1196 relative to the beginning of the file or the start of a record boundary.
1198 This first netCDF format variant, the only format supported in versions
1199 3.5.1 and earlier, is referred to as the netCDF *classic* format. The
1200 32-bit file offset in the classic format limits the total sizes of all
1201 but the last non-record variables in a file to less than 2 GiB, with a
1202 similar limitation for the data within each record for record variables.
1203 For more information see [Classic Format Limitations](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/NetCDF-Classic-Format-Limitations.html#NetCDF-Classic-Format-Limitations).
1205 The netCDF classic format is also identified as *version 1* or *CDF1* in
1206 reference to the format label at the start of a file.
1208 With netCDF version 3.6 and later, a second variant of netCDF format is
1209 supported in addition to the classic format. The new variant is referred
1210 to as the *64-bit offset* format, *version 2*, or *CDF2*. The primary
1211 difference from the classic format is the use of 64-bit file offsets
instead of 32-bit offsets, but it also supports larger variable and record sizes.
1217 Do I have to know which netCDF file format variant is used in order to access or modify a netCDF file? {#Do-I-have-to-know-which-netCDF-file-format-variant-is-used-in-order-to-access-or-modify-a-netCDF-file}
1221 No, version 3.6 and later versions of the netCDF C/Fortran library
1222 detect which variant of the format is used for each file when it is
1223 opened for reading or writing, so it is not necessary to know which
1224 variant of the format is used. The version of the format will be
1225 preserved by the library on writing. If you want to modify a classic
1226 format file to use the 64-bit offset format so you can make it much
1227 larger, you will have to create a new file and copy the data to it. The
**nccopy** utility available in version 4.1 and later can copy a classic file to a 64-bit offset file.
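For example, a command like `nccopy -k 64-bit-offset old.nc new.nc` (illustrative file names) would rewrite the classic file old.nc as a new 64-bit offset file new.nc.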
1233 Will future versions of the netCDF library continue to support accessing files in the classic format? {#Will-future-versions-of-the-netCDF-library-continue-to-support-accessing-files-in-the-classic-format}
1237 Yes, the 3.6 library and all planned future versions of the library will
1238 continue to support reading and writing files using the classic (32-bit
1239 offset) format as well as the 64-bit offset format. There is no need to
1240 convert existing archives from the classic to the 64-bit offset format.
1241 Even netCDF-4, which introduces a third variant of the netCDF format
1242 based on HDF5, continues to support accessing classic format netCDF
1243 files as well as 64-bit offset netCDF files. NetCDF-4 HDF5 files have
1244 even fewer restrictions on size than 64-bit offset netCDF files.
1248 Should I start using the new 64-bit offset format for all my netCDF files? {#Should-I-start-using-the-new-64-bit-offset-format-for-all-my-netCDF-files}
1252 No, we discourage users from making use of the 64-bit offset format
1253 unless they need it for large files. It may be some time until
1254 third-party software that uses the netCDF library is upgraded to 3.6 or
1255 later versions that support the large file facilities, so we advise
1256 continuing to use the classic netCDF format for data that doesn't
1257 require file offsets larger than 32 bits. The library makes this
recommendation easy to follow, since the default for file creation is the classic format.
1263 How can I tell if a netCDF file uses the classic format or 64-bit offset format? {#How-can-I-tell-if-a-netCDF-file-uses-the-classic-format-or-64-bit-offset-format}
1267 The short answer is that under most circumstances, you should not care,
1268 if you use version 3.6.0 or later of the netCDF library. But the
1269 difference is indicated in the first four bytes of the file, which are
1270 'C', 'D', 'F', '\\001' for the classic netCDF format and 'C', 'D', 'F',
1271 '\\002' for the 64-bit offset format. On a Unix system, one way to
display the first four bytes of a file, say foo.nc, is to run the
following command:

    od -An -c -N4 foo.nc

which prints `C D F 001` if foo.nc is a classic netCDF file or `C D F 002`
if it is a 64-bit offset netCDF file.
1294 With netCDF version 3.6.2 or later, there is an easier way, using the
1295 "-k" option to **ncdump** to determine the kind of file, for example:
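Running `ncdump -k foo.nc` prints `classic` for a classic format file or `64-bit offset` for a 64-bit offset file. (Later versions also report `netCDF-4` and `netCDF-4 classic model` for the netCDF-4 variants.)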
1304 What happens if I create a 64-bit offset format netCDF file and try to open it with an older netCDF application that hasn't been linked with netCDF 3.6? {#What-happens-if-I-create-a-64-bit-offset-format-netCDF-file-and-try-to-open-it-with-an-older-netCDF-application-that-hasnt-been-linked-with-netCDF-36}
1308 The application will indicate an error trying to open the file and
1309 present an error message equivalent to "not a netCDF file". This is why
it's a good idea not to create 64-bit offset netCDF files until you are sure
they will not need to be read by applications that use older versions of the
netCDF library.
1315 Can I create 64-bit offset files on 32-bit platforms? {#Can-I-create-64-bit-offset-files-on-32-bit-platforms}
1319 Yes, by specifying the appropriate file creation flag you can create
1320 64-bit offset netCDF files the same way on 32-bit platforms as on 64-bit
1321 platforms. You do not need to compile the C/Fortran libraries as 64-bit
1322 to support access to 64-bit offset netCDF files.
1326 How do I create a 64-bit offset netCDF file from C, Fortran-77, Fortran-90, or C++? {#How-do-I-create-a-64-bit-offset-netCDF-file-from-C-Fortran-77-Fortran-90-or-Cpp}
1330 With netCDF version 3.6.0 or later, use the NC\_64BIT\_OFFSET flag when
1331 you call nc\_create(), as in:
    err = nc_create("foo.nc",
                    NC_NOCLOBBER | NC_64BIT_OFFSET,
                    &ncid);
1339 In Fortran-77, use the NF\_64BIT\_OFFSET flag when you call
1340 nf\_create(), as in:
    iret = nf_create('foo.nc',
                     IOR(NF_NOCLOBBER,NF_64BIT_OFFSET),
                     ncid)
1348 In Fortran-90, use the NF90\_64BIT\_OFFSET flag when you call
1349 nf90\_create(), as in:
    iret = nf90_create(path="foo.nc",
                       cmode=or(nf90_noclobber,nf90_64bit_offset),
                       ncid=ncid)
1357 In C++, use the Offset64Bits enum in the NcFile constructor, as in:
    NcFile nc("foo.nc",
              FileMode=NcFile::New,
              FileFormat=NcFile::Offset64Bits);
In Java, use the setLargeFile() method of the NetcdfFileWriteable class.
1369 How do I create a 64-bit offset netCDF file using the ncgen utility? {#How-do-I-create-a-64-bit-offset-netCDF-file-using-the-ncgen-utility}
1373 A command-line option, '-k', specifies the kind of file format
1374 variant. By default or if '-k classic' is specified, the generated
1375 file will be in netCDF classic format. If '-k 64-bit-offset' is
1376 specified, the generated file will use the 64-bit offset format.
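For example, a command like `ncgen -k 64-bit-offset -o foo.nc foo.cdl` (illustrative file names) would generate the binary file foo.nc in 64-bit offset format from the CDL text file foo.cdl.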
1380 Have all netCDF size limits been eliminated? {#Have-all-netCDF-size-limits-been-eliminated}
The netCDF-4 HDF5-based format has no practical limits on the size of a variable.
1387 However, for the classic and 64-bit offset formats there are still
1388 limits on sizes of netCDF objects. Each fixed-size variable (except the
1389 last, when there are no record variables) and the data for one record's
1390 worth of a single record variable (except the last) are limited in size
to a little less than 4 GiB, which is twice the size limit in versions
1392 earlier than netCDF 3.6.
1394 The maximum number of records remains 2^32^-1.
1398 Why are variables still limited in size? {#Why-are-variables-still-limited-in-size}
1402 While most platforms support a 64-bit file offset, many platforms only
1403 support a 32-bit size for allocated memory blocks, array sizes, and
1404 memory pointers. In C developer's jargon, these platforms have a 64-bit
1405 `off_t` type for file offsets, but a 32-bit `size_t` type for size of
1406 arrays. Changing netCDF to assume a 64-bit `size_t` would restrict
1407 netCDF's use to 64-bit platforms.
1411 How can I write variables larger than 4 GiB? {#How-can-I-write-variables-larger-than-4-GiB}
1415 You can overcome the 4 GiB size barrier by using the netCDF-4 HDF5
1416 format for your data. The only change required to the program that
1417 writes the data is an extra flag to the file creation call, followed by
1418 recompiling and relinking to the netCDF-4 library. Programs that access
the data would also need to be recompiled and relinked to the netCDF-4 library.
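For example, here is a minimal sketch in C (the file name is illustrative, and the program assumes a netCDF library built with netCDF-4 support); the only change from a classic-format program is the NC\_NETCDF4 flag in the creation call:

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    int main(void)
    {
        int ncid, status;

        /* NC_NETCDF4 selects the HDF5-based format, which removes the
         * 4 GiB per-variable limit of the classic and 64-bit offset
         * formats; everything else in the program stays the same. */
        status = nc_create("big.nc", NC_NOCLOBBER | NC_NETCDF4, &ncid);
        if (status != NC_NOERR) {
            fprintf(stderr, "%s\n", nc_strerror(status));
            return 2;
        }

        /* Variables larger than 4 GiB could now be defined here. */
        return nc_close(ncid) == NC_NOERR ? 0 : 2;
    }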
1422 For classic and 64-bit offset netCDF formats, if you change the first
1423 dimension of a variable from a fixed size to an unlimited size instead,
1424 the variable can be much larger. Even though record variables are
restricted to 4 GiB per record, there may be 4 billion records. NetCDF
1426 classic or 64-bit offset files can only have one unlimited dimension, so
this won't work if you are already using a record dimension for other purposes.
1430 It is also possible to overcome the 4 GiB variable restriction for a
1431 single fixed size variable, when there are no record variables, by
1432 making it the last variable, as explained in the example in [NetCDF Classic Format Limitations](netcdf/NetCDF-Classic-Format-Limitations.html#NetCDF-Classic-Format-Limitations).
1436 Why do I get an error message when I try to create a file larger than 2 GiB with the new library? {#Why-do-I-get-an-error-message-when-I-try-to-create-a-file-larger-than-2-GiB-with-the-new-library}
1440 There are several possible reasons why creating a large file can fail
1441 that are not related to the netCDF library:
1443 - User quotas may prevent you from creating large files. On a Unix
1444 system, you can use the "ulimit" command to report limitations such
1445 as the file-size writing limit.
- There is insufficient disk space for the file you are trying to write.
1450 - The file system in which you are writing may not be configured to
allow large files. On a Unix system, you can test this with a command such as:
1455 dd if=/dev/zero bs=1000000 count=3000 of=./largefile
which should write a 3 GByte file named "largefile" in the current
directory. You can then verify its size (for example with `ls -l largefile`) and remove it.
1463 If you get the netCDF library error "One or more variable sizes violate
1464 format constraints", you are trying to define a variable larger than
1465 permitted for the file format variant. This error typically occurs when
1466 leaving "define mode" rather than when defining a variable. The error
1467 status cannot be returned when a variable is first defined, because the
1468 last fixed-size variable defined is permitted to be larger than other
1469 fixed-size variables (when there are no record variables).
1471 Similarly, the last record variable may be larger than other record
1472 variables. This means that subsequently adding a small variable to an
1473 existing file may be invalid, because it makes what was previously the
1474 last variable now in violation of the format size constraints. For
details on the format size constraints, see the User's Guide sections
1476 [NetCDF Classic Format Limitations](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#Classic-Limitations) and [NetCDF 64-bit Offset Format Limitations](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#64-bit-Offset-Limitations).
1478 If you get the netCDF library error "Invalid dimension size" for a
1479 non-negative size, you are exceeding the size limit of netCDF
1480 dimensions, which must be less than 2,147,483,644 for classic files with
1481 no large file support and otherwise less than 4,294,967,292.
1485 Do I need to use special compiler flags to compile and link my applications that use netCDF with Large File Support? {#Do-I-need-to-use-special-compiler-flags-to-compile-and-link-my-applications-that-use-netCDF-with-Large-File-Support}
1489 No, except that 32-bit applications should link with a 32-bit version of
1490 the library and 64-bit applications should link with a 64-bit library,
1491 similarly to use of other libraries that can support either a 32-bit or
1492 64-bit model of computation. But note that a 32-bit version of the
netCDF library fully supports writing and reading 64-bit offset netCDF files.
1498 Is it possible to create a "classic" format netCDF file with netCDF version 3.6.0 that cannot be accessed by applications compiled and linked against earlier versions of the library? {#isitpossibleclassic360}
1501 No, classic files created with the new library should be compatible with
1502 all older applications, both for reading and writing, with one minor
1503 exception. The exception is due to a correction of a netCDF bug that
1504 prevented creating records larger than 4 GiB in classic netCDF files
1505 with software linked against versions 3.5.1 and earlier. This limitation
1506 in total record size was not a limitation of the classic format, but an
1507 unnecessary restriction due to the use of too small a type in an
1508 internal data structure in the library.
1510 If you want to always make sure your classic netCDF files are readable
1511 by older applications, make sure you don't exceed 4 GiBytes for the
1512 total size of a record's worth of data. (All records are the same size,
1513 computed by adding the size for a record's worth of each record
1514 variable, with suitable padding to make sure each record begins on a
1515 byte boundary divisible by 4.)
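For example (a sketch of the arithmetic, with made-up variables): a record variable of type short with shape (time, 3) contributes 6 bytes of data per record, padded to 8; a float record variable with shape (time, 5) contributes 20 bytes; so each record occupies 28 bytes, and roughly 150 million such records stay under the 4 GiB limit.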
1519 NetCDF and Other Software {#NetCDF-and-Other-Software}
1522 What other software is available for accessing, displaying, and manipulating netCDF data? {#What-other-software-is-available-for-accessing-displaying-and-manipulating-netCDF-data}
1526 Utilities available in the current netCDF distribution from Unidata are
1527 **ncdump**, for converting netCDF files to an ASCII human-readable form,
1528 and **ncgen** for converting from the ASCII human-readable form back to
1529 a binary netCDF file or a C or FORTRAN program for generating the netCDF
1530 file. [Software for Manipulating or Displaying NetCDF Data](software.html) provides a list of other software useful for access, visualization, and analysis of netCDF data and data represented in other forms. Another useful [guide to netCDF utilities](http://nomads.gfdl.noaa.gov/sandbox/products/vis/data/netcdf/GFDL_VG_NetCDF_Utils.html) is available from NOAA's Geophysical Fluid Dynamics Laboratory.
1534 What other data access interfaces and formats are available for scientific data? {#What-other-data-access-interfaces-and-formats-are-available-for-scientific-data}
1538 The [Scientific Data Format Information FAQ](http://www.cv.nrao.edu/fits/traffic/scidataformats/faq.html) provides a somewhat dated description of other access interfaces and formats for scientific data, including [CDF](http://nssdc.gsfc.nasa.gov/cdf/cdf_home.html) and [HDF](http://hdf.ncsa.uiuc.edu/). A brief comparison of CDF, netCDF, and HDF is available in the [CDF FAQ](http://nssdc.gsfc.nasa.gov/cdf/html/FAQ.html). Another comparison is in Jan Heijmans' [An Introduction to Distributed Visualization](http://www.xi-advies.nl/downloads/AnIntroductionToDistributedVisualization.pdf). John May's book [*Parallel I/O for High Performance Computing*](http://www.llnl.gov/CASC/news/johnmay/John_May_book.html) includes a chapter on Scientific Data Libraries that describes netCDF and HDF5, with example source code for reading and writing files using both interfaces.
1542 What is the connection between netCDF and CDF? {#What-is-the-connection-between-netCDF-and-CDF}
1546 [CDF](http://cdf.gsfc.nasa.gov/) was developed at the NASA Space Science
1547 Data Center at Goddard, and is freely available. It was originally a VMS
1548 FORTRAN interface for scientific data access. Unidata reimplemented the
1549 library from scratch to use [XDR](http://www.faqs.org/rfcs/rfc1832.html)
1550 for a machine-independent representation, designed the
1551 [CDL](netcdf/CDL-Syntax.htm) (network Common Data form Language) text
1552 representation for netCDF data, and added aggregate data access, a
single-file implementation, named dimensions, and variable-specific attributes.
1556 NetCDF and CDF have evolved independently. CDF now supports many of the
1557 same features as netCDF (aggregate data access, XDR representation,
1558 single-file representation, variable-specific attributes), but some
1559 differences remain (netCDF doesn't support native-mode representation,
1560 CDF doesn't support named dimensions). There is no compatibility between
1561 data in CDF and netCDF form, but NASA makes available [some
1562 translators](http://cdf.gsfc.nasa.gov/html/dtws.html) between various
1563 scientific data formats. For a more detailed description of differences
1564 between CDF and netCDF, see the [CDF FAQ](http://cdf.gsfc.nasa.gov/html/FAQ.html).
1568 What is the connection between netCDF and HDF? {#What-is-the-connection-between-netCDF-and-HDF}
1572 The National Center for Supercomputing Applications (NCSA) originally
1573 developed [HDF4](http://hdf.ncsa.uiuc.edu/) and made it freely
1574 available. HDF4 is an extensible data format for self-describing files
1575 that was developed independently of netCDF. HDF4 supports both C and
1576 Fortran interfaces, and it has been successfully ported to a wide
1577 variety of machine architectures and operating systems. HDF4 emphasizes
1578 a single common format for data, on which many interfaces can be built.
1580 NCSA implemented software that provided a netCDF-2 interface to HDF4.
1581 With this software, it was possible to use the netCDF calling interface
1582 to place data into an HDF4 file.
1584 HDF5, developed and supported by The HDF Group, Inc., a non-profit
1585 spin-off from the NCSA group, provides a richer data model, with
1586 emphasis on efficiency of access, parallel I/O, and support for
1587 high-performance computing. The netCDF-4 project has implemented an
1588 enhanced netCDF interface on the HDF5 storage layer to preserve the
1589 desirable common characteristics of netCDF and HDF5 while taking
1590 advantage of their separate strengths: the widespread use and simplicity
1591 of netCDF and the generality and performance of HDF5.
1595 Has anyone implemented client-server access for netCDF data? {#Has-anyone-implemented-client-server-access-for-netCDF-data}
1599 Yes, as part of the [OPeNDAP](http://www.opendap.org/) framework,
1600 developers have implemented a client-server system for access to remote
1601 data that supports use of the netCDF interface for clients. A reference
version of the software is available from the [OPeNDAP download site](http://www.opendap.org/download/index.html/). After linking your netCDF application with the OPeNDAP netCDF library, you can use URLs to access data from other sites running an OPeNDAP server. This supports accessing small subsets of large datasets remotely through the netCDF interfaces, without copying the datasets.
1604 The 4.1 release of netCDF will include OPeNDAP client support; an
1605 experimental version is available now in the snapshot distributions.
1607 Other clients and servers support access through a netCDF interface to
1608 netCDF and other kinds of data, including clients written using the
1609 [netCDF-Java library](http://www.unidata.ucar.edu/software/netcdf-java/) and servers that use the
1610 [THREDDS Data Server](/software/thredds/current/tds/TDS.html).
1612 The [GrADS Data Server](http://grads.iges.org/grads/gds/) provides
1613 subsetting and analysis services across the Internet for any
1614 GrADS-readable dataset, including suitable netCDF datasets. The latest
1615 version of the [PMEL Live Access Server](http://ferret.pmel.noaa.gov/LAS) uses THREDDS Data Server technology to provide flexible access to geo-referenced scientific data, including netCDF data.
1619 How do I convert between GRIB and netCDF? {#How-do-I-convert-between-GRIB-and-netCDF}
1623 Several programs and packages have been developed that convert between
1624 [GRIB](http://www.wmo.ch/web/www/DPS/grib-2.html) and netCDF data:
1625 [ncl_convert2nc](http://www.ncl.ucar.edu/Applications/grib2netCDF.shtml),
1626 [degrib](http://www.nws.noaa.gov/mdl/NDFD_GRIB2Decoder/),
1627 [CDAT](software.html#CDAT), [CDO](software.html#CDO),
1628 [GDAL](http://www.gdal.org/), [GrADS](software.html#GrADS), and
1629 [wgrib2](http://www.cpc.noaa.gov/products/wesley/wgrib2/).
1631 The Unidata [netCDF Java Library](http://www.unidata.ucar.edu/software/netcdf-java/index.html) can
1632 read GRIB1 and GRIB2 data (and many other data formats) through a netCDF
1633 interface. As a command-line example, you could convert *fileIn.grib* to
1634 *fileOut.nc* as follows:
1637 java -Xmx1g -classpath netcdfAll-4.3.jar ucar.nc2.dataset.NetcdfDataset \
1638 -in fileIn.grib -out fileOut.nc [-isLargeFile] [-netcdf4]
1641 For more details on using netCDF Java, see the CDM man pages for
1642 [nccopy](http://www.unidata.ucar.edu/software/netcdf-java/reference/manPages.html#nccopy).
1649 Can I recover data from a netCDF file that was not closed properly? {#Can-I-recover-data-from-a-netCDF-file-that-was-not-closed-properly}
1653 _I have some netcdf files which have data in them and were apparently
1654 not properly closed. When I examine them using **ncdump** they report zero
data points, although the size is a few megabytes. Is there a way of
recovering the data?_
1658 If the files are in classic format or 64-bit offset format (if they were
1659 created by netCDF version 3.6.3 or earlier, for example), then you can
1660 use an editor that allows you to change binary files, such as emacs, to
1661 correct the four-byte number of records field in the file. This is a
big-endian 4-byte integer that begins at byte offset 4 in the file.
1664 This is what the first eight bytes would look like for classic format if
1665 you had zero records, where printable characters are specified as
1666 US-ASCII characters within single-quotes and non-printable bytes are
1667 denoted using a hexadecimal number with the notation '\\xDD', where each
1668 D is a hexadecimal digit:
    'C' 'D' 'F' \x01 \x00 \x00 \x00 \x00

or

    'C' 'D' 'F' \x02 \x00 \x00 \x00 \x00

for 64-bit offset format.
1682 And this is what the first eight bytes should look like for classic
format if you had 500 records (500 is 01F4 in hexadecimal):

    'C' 'D' 'F' \x01 \x00 \x00 \x01 \xf4

or

    'C' 'D' 'F' \x02 \x00 \x00 \x01 \xf4

for 64-bit offset format.
1697 So if you can compute how many records should be in the file, you can
1698 edit the second four bytes to fix this. You can find out how many
1699 records should be in the file from the size of the file and from the
1700 variable types and their shapes. See the [description of the netCDF format](http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#File-Format)
1701 for classic and 64-bit offset files for how to figure out how large the
1702 file should be for fixed sized variables of particular shapes and for a
1703 specified number of record variables of particular shapes.
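For example (a sketch with made-up numbers): if the header and fixed-size variables occupy the first 4,096 bytes and each record occupies 1,000 bytes, then a 1,004,096-byte file should hold (1,004,096 - 4,096) / 1,000 = 1,000 records, and the four bytes starting at offset 4 should be edited to read \x00 \x00 \x03 \xe8 (1,000 decimal is 3E8 in hexadecimal).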
1705 Note that if you neglected to call the appropriate netCDF close function
1706 on a file, data in the last record written but not flushed to the disk
1707 may also be lost, but correcting the record count should allow recovery
1708 of the other records.
1712 Is there a list of reported problems and workarounds? {#Is-there-a-list-of-reported-problems-and-workarounds}
1716 Yes, the document [Known problems with the netCDF Distribution](known_problems.html) describes reported problems and workarounds in the latest version and some earlier releases.
1720 How do I make a bug report? {#How-do-I-make-a-bug-report}
1724 If you find a bug, send a description to
1725 support-netcdf@unidata.ucar.edu. This is also the address to use for
1726 questions or discussions about netCDF that are not appropriate for the
1727 entire netcdfgroup mailing list.
1731 How do I search through past problem reports? {#How-do-I-search-through-past-problem-reports}
1735 A search link is available at the bottom of the [netCDF homepage](http://www.unidata.ucar.edu/software/netcdf/), providing a full-text search of the
support questions and answers about netCDF provided by Unidata support staff.
1741 Programming with NetCDF {#Programming-with-NetCDF}
1744 Which programming languages have netCDF interfaces? {#Which-programming-languages-have-netCDF-interfaces}
1747 The netCDF distribution comes with interfaces for C, Fortran77,
Fortran90, and C++. Other languages for which netCDF interfaces are available include:
1751 - [Ada](http://freshmeat.net/projects/adanetcdf/)
1752 - [IDL](software.html#IDL)
1753 - [Java](software.html#Java%20interface)
1754 - [MATLAB](software.html#MATLAB)
1755 - [Perl](software.html#Perl)
1756 - [Python](software.html#Python)
1757 - [R](software.html#R)
1758 - [Ruby](software.html#Ruby)
1759 - [Tcl/Tk](software.html#Tcl/Tk)
1763 Are the netCDF libraries thread-safe? {#Are-the-netCDF-libraries-thread-safe}
1766 The C-based libraries are *not* thread-safe. C-based libraries are those
1767 that depend on the C library, which currently include all language
1768 interfaces except for the Java interface. The Java interface is
1769 thread-safe when a few simple rules are followed, such as each thread
getting its own handle to a file.
1774 How does the C++ interface differ from the C interface? {#How-does-the-Cpp-interface-differ-from-the-C-interface}
1777 It provides all the functionality of the C interface (except for the
1778 generalized mapped access of ncvarputg() and ncvargetg()) and is
1779 somewhat simpler to use than the C interface. With the C++ interface, no
1780 IDs are needed for netCDF components, there is no need to specify types
1781 when creating attributes, and less indirection is required for dealing
1782 with dimensions. However, the C++ interface is less mature and
1783 less-widely used than the C interface, and the documentation for the C++
1784 interface is less extensive, assuming a familiarity with the netCDF data
1785 model and the C interface. Recently development of the C++ interface has
languished as resources have been redirected to enhancing the Java interface.
1791 How does the Fortran interface differ from the C interface? {#How-does-the-Fortran-interface-differ-from-the-C-interface}
1794 It provides all the functionality of the C interface. The Fortran
1795 interface uses Fortran conventions for array indices, subscript order,
1796 and strings. There is no difference in the on-disk format for data
1797 written from the different language interfaces. Data written by a C
1798 language program may be read from a Fortran program and vice-versa. The
1799 Fortran-90 interface is much smaller than the FORTRAN 77 interface as a
result of using optional arguments and overloaded functions wherever possible.
1805 How do the Java, Perl, Python, Ruby, ... interfaces differ from the C interface? {#How-do-the-Java-Perl-Python-Ruby-interfaces-differ-from-the-C-interface}
1808 They provide all the functionality of the C interface, using appropriate
1809 language conventions. There is no difference in the on-disk format for
1810 data written from the different language interfaces. Data written by a C
1811 language program may be read from programs that use other language
1812 interfaces, and vice-versa.
1816 How do I handle errors in C? {#How-do-I-handle-errors-in-C}
For clarity, the NetCDF C Interface Guide contains examples which use a
function called handle\_error() to handle potential errors like this:
1823 status = nc_create("foo.nc", NC_NOCLOBBER, &ncid);
1824 if (status != NC_NOERR) handle_error(status);
1827 Most developers use some sort of macro to invoke netCDF functions and
1828 test the status returned in the calling context without a function call,
1829 but using such a macro in the User's Guides arguably makes the examples
1830 needlessly complex. For example, some really excellent developers define
1831 an "ERR" macro and write code like this:
1834 if (nc_create(testfile, NC_CLOBBER, &ncid)) ERR;
where ERR is defined in a header file:
    /* This macro prints an error message with line number and name of
     * test program, then returns from the enclosing function. */
    #define ERR do { \
    fflush(stdout); /* Make sure our stdout is synced with stderr. */ \
    fprintf(stderr, "Sorry! Unexpected result, %s, line: %d\n", \
            __FILE__, __LINE__); \
    return 2; \
    } while (0)
Ultimately, error handling depends on the application that is calling
netCDF functions. However, we strongly suggest that some form of error
1852 checking be used for all netCDF function calls.
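As an illustration (a minimal sketch, not the library's own code), a handle\_error() function like the one used in the examples above could simply print the translated message and exit:

    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    /* Print the netCDF error message for a bad status and exit.
     * nc_strerror() converts a netCDF status code into a string. */
    void handle_error(int status)
    {
        if (status != NC_NOERR) {
            fprintf(stderr, "netCDF error: %s\n", nc_strerror(status));
            exit(2);
        }
    }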
1858 ==============================================
Below is a list of commonly asked questions regarding NetCDF and CMake.
1862 How can I see the options available to CMake? {#listoptions}
1863 ---------------------------------------------
1865 $ cmake [path to source tree] -L - This will show the basic options.
1866 $ cmake [path to source tree] -LA - This will show the basic and advanced options.
1869 How do I specify how to build a shared or static library? {#sharedstatic}
1870 --------------------------------------------------------
1872 This is controlled with the internal `cmake` option, `BUILD_SHARED_LIBS`.
1874 $ cmake [Source Directory] -DBUILD_SHARED_LIBS=[ON/OFF]
1877 Can I build both shared and static libraries at the same time with cmake? {#sharedstaticboth}
1878 -------------------------------------------------------------------------
Not at this time; if you need both, you must build one version first and then the other.
1882 How can I specify linking against a particular library? {#partlib}
1883 -------------------------------------------------------
1885 It depends on the library. To specify a custom `ZLib`, for example, you would do the following:
1887 $ cmake [Source Directory] -DZLIB_LIBRARY=/path/to/my/zlib.lib
1890 `HDF5` is more complex, since it requires both the `hdf5` and `hdf5_hl` libraries. You would specify custom `HDF5` libraries as follows:
1892 $ cmake [Source Directory] -DHDF5_LIB=/path/to/hdf5.lib \
1893 -DHDF5_HL_LIB=/path/to/hdf5_hl.lib \
1894 -DHDF5_INCLUDE_DIR=/path/to/hdf5/include
1897 Alternatively, you may specify:
1899 $ cmake [Source Directory] \
1900 -DHDF5_LIBRARIES="/path/to/hdf5.lib;/path/to/hdf5_hl.lib" \
1901 -DHDF5_INCLUDE_DIRS=/path/to/hdf5/include/
What if I want to link against multiple libraries in a non-standard location? {#nonstdloc}
1905 ----------------------------------------------------------------------------
1907 You can specify the path to search when looking for dependencies and header files using the `CMAKE_PREFIX_PATH` variable:
On Windows:

$ cmake [Source Directory] -DCMAKE_PREFIX_PATH=c:\shared\libs\

On Linux, Unix, or OSX:

$ cmake [Source Directory] -DCMAKE_PREFIX_PATH=/usr/custom_library_locations/
How can I specify a Parallel Build using HDF5? {#parallelhdf}
1919 ----------------------------------------------
1921 If cmake is having problems finding the parallel `HDF5` install, you can specify the location manually:
1924 $ cmake [Source Directory] -DENABLE_PARALLEL=ON \
1925 -DHDF5_LIB=/usr/lib64/openmpi/lib/libhdf5.so \
-DHDF5_HL_LIB=/usr/lib64/openmpi/lib/libhdf5_hl.so \
-DHDF5_INCLUDE_DIR=/usr/include/openmpi-x86_64
1929 You will, of course, need to use the location of the libraries specific to your development environment.
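Depending on your MPI installation, you may also need to point CMake at an MPI compiler wrapper, for example by adding `-DCMAKE_C_COMPILER=mpicc` (shown here only as a common case; the wrapper name and paths vary between MPI distributions).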
1936 What other future work on netCDF is planned? {#What-other-future-work-on-netCDF-is-planned}
Issues, bugs, and plans for netCDF are maintained in the Unidata issue trackers for
[netCDF-C](https://www.unidata.ucar.edu/jira/browse/NCF), [Common Data Model / NetCDF-Java](https://www.unidata.ucar.edu/jira/browse/CDM),
[netCDF-Fortran](https://www.unidata.ucar.edu/jira/browse/NCFORTRAN),
[netCDF-CXX4](https://www.unidata.ucar.edu/jira/browse/NCXXF), and
[netCDF-C++ (deprecated)](https://www.unidata.ucar.edu/jira/browse/NCCPP).