Automatic mapping and innovative on-demand mapping services at IGN France.
Modern cartography has been evolving quickly thanks to new tools and devices. Mapmaking has become widespread through the Internet. The growing possibility of creating maps on the Internet led to the development of a wide range of products with different cartographic quality (Cartwright 2007; Harrie and Stigmar 2009). Intuitive on-demand mapping tools and online services are needed to improve and standardize their quality (Christophe 2011; Harrie, Mustiere, and Stigmar 2011).
National mapping agencies are now faced with the need to publish maps through geoportals, ideally up-to-date, at low cost, and with high cartographic quality (Foerster, Stoter, and Kraak 2010; Duchene et al. 2014). This is in line with the INSPIRE Directive in Europe (INSPIRE 2007) and the global tendency of disseminating public data.
In this article, we present work carried out at IGN France, through the 'on-demand mapping' project, to provide data and services for high-quality, customizable maps. The project addresses three topics:
(1) Automatic production of high-quality and up-to-date cartographic vector data from reference geodata;
(2) Generation of raster maps from the vector data for distribution through Web services or downloadable tiles;
(3) Offering high-level and innovative customization applications on these products, available through Web services.
After presenting a global overview of the process in the next section, we explain how the digital cartographic model (DCM) was produced. Then, we focus on the map legend and how it is customizable. Finally, before concluding, we present a prototype for offering a customization service.
A DCM is a database ready for displaying and plotting data at specified map scales (Grünreich 1985; Brassel and Weibel 1988). A DCM is usually derived from a digital landscape model (DLM), which is a database accurately describing topographic features without considering map scale or graphic constraints. A DCM is produced to be readable at specified map scales. Figure 1 illustrates the process for building a multiscale DCM from existing data at IGN France. Multiscale means here that DCMs at different scales are stored in a common system and that data are harmonized as much as possible, in terms of schemas and graphic choices. It should be noted, however, that we did not consider explicitly linking objects at different scales, even though such links could be useful, for example, for managing updates or detecting inconsistencies.
Various data sources were used to build the multiscale DCM, as detailed in the following section. Data were stored in a PostgreSQL server with the PostGIS spatial extension. Note that (nearly) complete automation is required to build the multiscale DCM. Automation reduces the cost of producing maps; it also shortens production time and thus ensures up-to-date maps (at least maps as up-to-date as the data used to produce them). To obtain such automation, it is necessary to take advantage of existing data and processes.
We follow a pragmatic approach in our work. First, we used different original data sources to build the multiscale DCM, either existing DLMs or DCMs. Then, we applied both the 'star' and 'ladder' approaches as defined by Eurogeographics (2005), that is, respectively, deriving different levels of the DCM directly from the same original data-set by means of independent processes, and deriving levels of the DCM recursively and progressively from scale to scale. This mixed strategy is the one currently under development or in use in most national mapping agencies, as assessed by the surveys of Stoter (2005) and Duchene et al. (2014).
Once the multiscale vector DCM has been built, it can be symbolized, rasterized, and tiled for distribution through Web services or as downloadable files (for a more detailed description, see below). At this step, a good organization of legend parameters facilitates the customization of color choices while keeping the continuity of colors through scales. Some default legends have thus been defined. Moreover, this paves the way to on-demand customization of legends, in terms of the data to be represented and color choices. A prototype that helps users define their own original legends is presented later in the paper.
3. Producing the multiscale vector DCM
3.1. Small scales, from 1:100,000 to 1:1,000,000
Cartographic data from IGN's mapping department are used as part of our DCM. The data are published at scales of 1:100,000, 1:250,000, and 1:1,000,000 and are updated every year or every 2 years to produce both paper and digital maps (Figure 2).
The maps have been originally produced and are updated using production flow lines (Jahard, Lemarie, and Lecordix 2003) partly based on automatic generalization methods developed by the COGIT Lab and partners (Barrault et al. 2001; Duchene 2010). Those maps have initially been designed as paper maps. The purpose of the 'on-demand mapping' project presented here is to use those existing maps in a consistent multiscale DCM.
The data we use were thus only converted from proprietary formats such as 1Spatial (http://www.1spatial.com/) Lamps2 or GeoConcept (http://www.geoconcept.com) to a central PostgreSQL database, using conversion tools already widely used on other production flow lines.
3.2. Scale 1:25,000
Currently, IGN does not have a full vector cartographic database at 1:25,000 scale. Maps of part of the country are still stored as raster files. A countrywide vector database at the scale of 1:25,000 should be available at the end of 2017.
Furthermore, there is urgent need for a quality map at this scale, covering all of France, showing fewer features than the existing topographic maps but with more frequent updates (an uncluttered map allows for overlaying other data with a more readable result).
We implemented a process to build this needed cartographic database at the 1:25,000 scale and make available a countrywide vector map.
The data source for the map at this scale is the production version of the IGN BDTOPO® database. An updated version of this database is released every 6 months. The production of 1:25,000-scale cartographic data from BDTOPO® was automated to obtain a ready-to-display map with good cartographic quality. We used existing flow lines from a previous R&D project implemented to automate the production of 1:25,000 topographic maps at IGN (Braun et al. 2007; Maugeais et al. 2011).
The 1:25,000 cartographic database was produced in five phases:
(1) Preprocessing
In the source database, vegetation is too detailed and buildings are split between cadastral units. The data were cleaned during preprocessing, a simple but CPU-intensive task performed by a dedicated Java process using the open-source Java library JTS (Davis 2013). The process loads data tile after tile, processes them in memory, and uploads the results into new PostgreSQL tables. This process takes about 24 hours to handle the whole French territory.
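The preprocessing idea, re-merging building parts split along cadastral limits, can be sketched as follows. The production process uses JTS polygon operations; this illustration stands in for them with axis-aligned rectangles and a union-find grouping, on made-up data (none of the names come from IGN's actual schema).

```python
# Sketch of the preprocessing step: building parts split along cadastral
# limits are re-merged before cartographic processing. Rectangles
# (xmin, ymin, xmax, ymax) stand in for real building polygons.

def touches(a, b):
    """True if two rectangles share an edge segment (not just a corner)."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    overlap_x = ax0 < bx1 and bx0 < ax1   # intervals overlap on the x axis
    overlap_y = ay0 < by1 and by0 < ay1   # intervals overlap on the y axis
    edge_x = (ax1 == bx0 or bx1 == ax0) and overlap_y
    edge_y = (ay1 == by0 or by1 == ay0) and overlap_x
    return edge_x or edge_y

def merge_building_parts(parts):
    """Group touching parts (union-find) and return one group per building."""
    parent = list(range(len(parts)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(parts)):
        for j in range(i + 1, len(parts)):
            if touches(parts[i], parts[j]):
                parent[find(i)] = find(j)
    groups = {}
    for i, p in enumerate(parts):
        groups.setdefault(find(i), []).append(p)
    return list(groups.values())

# Two halves of one building split by a cadastral limit, plus a detached shed.
parts = [(0, 0, 5, 10), (5, 0, 9, 10), (20, 0, 24, 4)]
merged = merge_building_parts(parts)
print(len(merged))  # 2 buildings after merging
```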
(2) Main process
Several tasks were performed automatically, as listed below and illustrated in Figures 3-5.
Symbolization:
(1) Defining symbolization relevant to the 1:25,000 scale, by combining different attribute values. For example, the choice of symbols of roads depends on six attributes: width, type (track, road, cycle path ...), state (current or foreseen), importance (major, local ...), level (bridge, tunnel ...), and accessibility (open, forbidden ...);
(2) Symbol orientation. For instance, the orientation of church crosses was derived from the orientation of the churches themselves, computed from their shapes;
(3) Matching boundaries (like administrative boundaries) and itineraries (like hiking paths) with roads and hydrographic networks. Matching is performed on the comparison of geometric and topologic configurations (Mustiere and Devogele 2008). It was done to offset boundaries and itineraries efficiently (Figure 3). Offsetting depends on the width of cartographic symbols and is done to preserve a good continuity and legibility of symbols representing boundaries and itineraries;
Generalization:
(1) Building generalization, based on work of the AGENT project (Barrault et al. 2001). Buildings may be simplified, eliminated, or displaced according to graphic constraints (see Figure 4). This requires the definition of meso objects such as city blocks determined from buildings (those objects are also useful for label placement);
(2) Elimination of redundant features: in dense areas, symbols are filtered to ensure legibility;
(3) Filtering of complex shapes such as forested areas.
Text placement:
(1) Matching points of interest (POIs) with the corresponding features they describe. For example, in the reference database, the name of a lake may be an attribute of a point feature, while the shape of the lake itself is described by a surface feature, and those two features are not explicitly linked. However, this link is necessary for efficient label placement, which depends on the shape of the lake and the size of the lettering. This matching is done through spatial joins;
(2) Name formatting following precise cartographic standards: capitalization, abbreviation ...;
(3) Label placement, based on Barrault's (1998) work. Labels were associated with points (POI, names of populated areas) and linear features (rivers, roads ...). The algorithm tends to optimize the placement according to graphic constraints. For example, labels should not overlap each other; they should be efficiently placed and should minimize overlapping other symbols such as buildings.
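The spatial join that links a named POI point to the surface feature it describes can be sketched as follows. The production system uses PostGIS/JTS spatial joins; this minimal illustration uses a ray-casting point-in-polygon test on made-up features.

```python
# Illustrative spatial join: attach a named POI point to the polygon
# (e.g. a lake) that contains it. Feature records are invented.

def point_in_polygon(pt, ring):
    """Ray casting: is (x, y) inside the polygon given as a vertex ring?"""
    x, y = pt
    inside = False
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        if (y1 > y) != (y2 > y):          # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def join_poi_to_lakes(pois, lakes):
    """Attach each POI's name to the first lake polygon containing it."""
    links = {}
    for name, pt in pois:
        for lake_id, ring in lakes:
            if point_in_polygon(pt, ring):
                links[name] = lake_id
                break
    return links

lakes = [("lake_42", [(0, 0), (10, 0), (10, 6), (0, 6)])]
pois = [("Lac d'Annecy", (4, 3)), ("Refuge", (50, 50))]
print(join_poi_to_lakes(pois, lakes))  # {"Lac d'Annecy": 'lake_42'}
```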
(3) Distributed computation
This whole process is not only a challenge in terms of algorithmic efficiency to produce good cartographic output. It is also a technical challenge to run such heavy computations over the whole French territory.
The process first extracts data from the source DLM stored in a PostgreSQL/PostGIS database and loads it into the 1Spatial Clarity software. Most processes are then run using this software and the open-source Java library JTS. Label placement is performed using in-house IGN software that takes into account the positions of neighboring labels and other map objects to place labels accurately (Barrault 1998).
Computation was performed on 20 km × 20 km areas. In urban areas such as Paris, Lyon, Marseille, and Lille, it was done on 10 km × 10 km areas: a total of 1567 tiles were used for computation. French overseas territories were not covered, but adjustments of the process are being planned to enable mapping in those regions.
The mapping of France is load-balanced across 30 parallel processes on three computers and takes about 5 days to complete. Parallelization is necessary, as the process would otherwise take 124 days (see Table 1). The software used for load balancing was developed by an IGN team for production management and distributed computing.
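The load balancing described above can be sketched as a greedy longest-processing-time assignment of tiles to workers. The actual IGN scheduling tool is internal; the per-tile durations below are synthetic, and only the orders of magnitude (1567 tiles, 30 workers, roughly 2 h per tile) come from the text and Table 1.

```python
# Sketch of tile scheduling: longest tiles are assigned first, each to
# the currently least-loaded worker. Durations are synthetic.
import heapq
import random

random.seed(0)
# Synthetic per-tile durations (hours), roughly matching Table 1's
# average of about 2 h over 1567 tiles.
durations = [random.uniform(0.1, 4.0) for _ in range(1567)]

def makespan(durations, workers):
    """Greedy LPT assignment; returns the wall-clock time of the schedule."""
    heap = [0.0] * workers                 # current load of each worker
    heapq.heapify(heap)
    for d in sorted(durations, reverse=True):
        heapq.heappush(heap, heapq.heappop(heap) + d)
    return max(heap)

sequential_days = sum(durations) / 24
parallel_days = makespan(durations, 30) / 24
print(round(sequential_days), round(parallel_days))
```

With many small tiles relative to the number of workers, the greedy schedule comes close to the ideal speed-up of 30, which is consistent with the reported drop from 124 days to about 5.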
Space is split into square tiles to deal with the heavy computation. However, this may affect cartographic consistency at tile borders. Two methods are available to deal with this problem. First, for some operations, we force objects to remain inside the tiles. For example, when buildings are displaced, they are forced to remain in their original tile. Second, long objects crossing several tiles are split at tile borders. This sometimes leads to unsatisfactory results, like split administrative boundaries (see Figure 6). However, this is not a big issue because long features are subject to few operations, and roads in particular are not generalized.
In addition to the heavy cartographic process described above, the target database is cleaned using SQL queries that delete duplicated features located along the borders of contiguous computation tiles.
(4) Automated controls
Exhaustive visual control of the whole map cannot be achieved in a short time frame, so guided controls have been implemented to ensure the reliability of the result. Comparisons between the DLM and the DCM, including the number of features and the sums of lengths and surfaces, are performed to ensure that no object is lost. Moreover, all attribute combinations and every kind of symbolization are checked to validate the cartographic results. The open-source software OpenJUMP (http://www.openjump.org/) is used to perform these tasks.
(5) Interactive corrections
In addition to this fully automated step, label placement was checked. We found that approximately 8% of place names could not be positioned automatically by the process. It took around 200 hours of interactive editing (for the whole of France) to locate most of the missing place names (all of them, except the less important ones). Currently, each new edition of the map is checked and missing place names are edited again. This is time-consuming and a reason for moving toward more automation. Studies are underway to improve the process by taking advantage of interactive work done for previous editions. However, the issue is not so straightforward. Interactive editing is still necessary as label data in the database are currently being intensively updated.
Compared to the current SCAN 25® product,1 the new 1:25,000-scale map, named SCAN Express 25, is lighter (Figure 7). It is more suitable for on-screen use and the overlay of business data. SCAN Express 25 is rendered consistently over the whole French territory. Importantly, the new map is up-to-date because it is computed from the latest version of the DLM. However, SCAN 25® is more accurate, showing such details as land use and relief features. Comprehensive controls also make its final quality better. IGN France has decided to offer the two digital maps, SCAN 25® and SCAN Express 25, as a package to professional users (Figure 8).
3.3. Large-scale maps
For map generation at scales larger than 1:25,000, we used both raw data from the BDTOPO® and DCM data mapped at the 1:25,000 scale. We thus took advantage of the cartographic work that had already been done and added more detail from the raw data. For instance, because buildings were generalized for the 1:25,000-scale map, we used raw building data for larger scales. Roads, however, were not generalized, but a symbolization attribute was computed during the 1:25,000 process. In this case, it was pertinent to keep the classification for larger-scale mapping in order to obtain consistent representation among scales.
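The per-layer source choice can be summarized as a simple lookup. The rules below are an illustrative reading of this paragraph, not IGN's exact configuration; the dataset names are placeholders.

```python
# Illustrative per-layer source selection for large-scale rendering:
# generalized 1:25,000 DCM data by default, raw BDTOPO(R) data for
# buildings at larger scales, and the 1:25,000 road classification
# reused across scales. Names are hypothetical.

def source_for(layer, scale_denominator):
    large_scale = scale_denominator < 25_000
    if layer == "building":
        # Buildings were generalized for 1:25,000; use raw data when zoomed in.
        return "BDTOPO_raw" if large_scale else "DCM_25k"
    if layer == "road":
        # Roads keep the symbolization attribute computed at 1:25,000.
        return "DCM_25k_classification"
    return "DCM_25k"

print(source_for("building", 10_000))  # BDTOPO_raw
print(source_for("road", 10_000))      # DCM_25k_classification
```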
The generation of maps at large scales is performed by the GeoServer rendering engine. This includes placing street names and addresses that are not present in the DCM at smaller scales. At larger scales, there is no need to perform intermediate DCM computation of all themes (Figure 9).
4. Map legend and rasterization
4.1. Legend construction
Rendering is based on GeoServer capabilities. This software uses the OGC profile for Web Map Service (WMS) Styled Layer Descriptor (SLD) to describe object rendering (Open Geospatial Consortium 2007). Graphic characteristics are described in XML files, using the OpenGIS[R] Symbology Encoding language (SE) (Open Geospatial Consortium 2006).
The first necessary step is to develop SLD files that produce a map that looks as close as possible to existing maps produced at IGN at all scales.
While creating map legends, we focused mainly on color choices because color is the visual variable that has the most significant impact. Color is a parameter that can be modified without having to rerun other mapping processes such as generalization. Thus, merely modifying the color allows for an almost instant rendering.
Colors used in classical maps produced by IGN could not be directly reused for our multiscale map, as they have some limitations for on-screen multiscale use.
The legend of the topographic map at a 1:25,000 scale makes use of vibrant colors and black, which makes it difficult to overlay user data. We therefore created a new map legend with lighter colors which were carefully picked to enable printing as well. This legend is named hereafter the 'standard legend'.
Maps at other scales are produced by different production flow lines with different specifications that have evolved over time, leading to heterogeneous legends through scales (Figure 10). We applied the same legend to the cartographic data at every scale, thus creating a smoother zoom-in and zoom-out effect for on-screen multiscale use. All the scales with harmonized legends in lighter colors are now made available by IGN France on the French Geoportail API® (http://api.ign.fr/accueil) as a new WMS layer named 'SCAN Express standard' (Figure 11).
4.2. Paving the way for legend customization
In order to allow map legend customization, we used the variable substitution capability of GeoServer (Figure 12). It allows users to pass key-value pairs as parameters of a WMS request. The custom parameters are handled by GeoServer (GeoServer 2012) and their values are substituted into the SLD files.
This requires recrafting the SLD files for every layer style concerned. However, once this operation is performed, there is no need to create new SLD files for each new map legend.
We settled on about 40 variables for color customization and 10 extra variables for other customization possibilities, such as the ability to add a UTM grid or to display (or hide) some of the features that were not properly checked. Most of these extra variables are intended for experimental use. The 40 variables for color customization are used as pivot legends through scales; modifying one of the variables (e.g., the color of watercourses) directly impacts colors at all scales.
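A customized request using GeoServer's variable substitution might be built as follows: the SLD references a variable such as ${env('hydro_color', '0077BE')} with a default value, and the WMS request overrides it through the env parameter (key:value pairs separated by semicolons). The endpoint, layer, and variable names here are placeholders, not IGN's actual configuration.

```python
# Sketch of a GetMap request carrying custom legend colors through
# GeoServer's env parameter. Endpoint and names are hypothetical.
from urllib.parse import urlencode, parse_qs, urlparse

def custom_getmap_url(base, layer, bbox, colors):
    env = ";".join(f"{k}:{v}" for k, v in colors.items())
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": "EPSG:2154",
        "BBOX": ",".join(map(str, bbox)),
        "WIDTH": 256, "HEIGHT": 256, "FORMAT": "image/png",
        "env": env,                       # key:value pairs read by the SLD
    }
    return base + "?" + urlencode(params)

url = custom_getmap_url(
    "https://example.org/geoserver/wms",           # placeholder endpoint
    "scan_express",                                # hypothetical layer name
    (600000, 6800000, 620000, 6820000),            # Lambert-93 extent
    {"hydro_color": "1A6FB0", "veg_color": "9BC995"},
)
q = parse_qs(urlparse(url).query)
print(q["env"][0])  # hydro_color:1A6FB0;veg_color:9BC995
```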
We used the variable substitution capability to produce a second legend based on the colors of IGN's classical topographic maps at the 1:25,000 scale. This 'classical legend' is available on the French Geoportail API® (http://api.ign.fr/accueil) as a new WMS layer, 'SCAN Express classical'. Alternative legends have been tested too (Figure 13).
Color customization with variable substitution affects line and area features as well as some labels.
Point features have symbols stored in SVG (Scalable Vector Graphics) files. Two sets of SVG files have been produced: one with the colors of the 'standard' map legend and the other with gray-scale symbols. We are still working on a satisfactory solution for replacing colors in both SVG and SLD files.
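One conceivable approach, sketched here with no claim to be IGN's eventual solution, is to rewrite the fill attributes of the SVG symbols before serving them:

```python
# Sketch: recolor SVG point symbols by rewriting fill attributes.
# The SVG content below is a made-up minimal symbol.
import xml.etree.ElementTree as ET

SVG = """<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10">
  <circle cx="5" cy="5" r="4" fill="#000000"/>
  <rect x="2" y="2" width="6" height="6" fill="#FF0000"/>
</svg>"""

def recolor_svg(svg_text, mapping):
    """Replace fill colors per the old->new mapping; return new SVG text."""
    root = ET.fromstring(svg_text)
    for el in root.iter():
        fill = el.get("fill")
        if fill in mapping:
            el.set("fill", mapping[fill])
    return ET.tostring(root, encoding="unicode")

out = recolor_svg(SVG, {"#FF0000": "#1A6FB0"})
print("#1A6FB0" in out and "#FF0000" not in out)  # True
```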
4.3. Rasterization with different legends
For personalized map legends, rasterization is performed on the fly, by passing the custom colors and other parameters through the WMS request. Images are not stored.
Rasterization on the fly is very costly in terms of memory and computation time. So both 'standard' and 'classical' legends are prerasterized at different resolutions ranging from 1 to 2048 meters per pixel.
The tool used for rasterization is a WMS harvesting tool developed at IGN. This tool sends WMS requests per tile, whose size depends on the map scale, and covers the whole territory. Requests are load-balanced across several GeoServer instances, and the resulting images may be merged into a single BigTIFF image per scale, whose size can reach several tens of gigabytes.
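The harvesting loop can be sketched as follows: for each resolution of the pyramid (1 to 2048 m per pixel, doubling at each level), the territory bounding box is cut into fixed-size request tiles. The bounding box and the tile size in pixels below are illustrative values, not those of IGN's tool.

```python
# Sketch of the WMS harvesting loop: enumerate tile bboxes for each
# resolution of the pyramid. Bbox and tile size are illustrative.
import math

RESOLUTIONS = [2 ** i for i in range(12)]      # 1, 2, 4, ..., 2048 m/px
TILE_PX = 2000                                  # pixels per harvested image

def tile_bboxes(bbox, resolution, tile_px=TILE_PX):
    """Yield square tile bboxes covering bbox at the given resolution."""
    xmin, ymin, xmax, ymax = bbox
    step = resolution * tile_px                 # tile size in meters
    nx = math.ceil((xmax - xmin) / step)
    ny = math.ceil((ymax - ymin) / step)
    for i in range(nx):
        for j in range(ny):
            yield (xmin + i * step, ymin + j * step,
                   xmin + (i + 1) * step, ymin + (j + 1) * step)

# Metropolitan France in Lambert-93, roughly 1000 km x 1000 km.
france = (100000, 6000000, 1100000, 7000000)
print(len(RESOLUTIONS))                          # 12 pyramid levels
print(sum(1 for _ in tile_bboxes(france, 2048))) # coarsest level: 1 tile
```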
Once published as WMS, the prerasterized tiles offer much better performance. Those tiles are also the final cartographic products that our process can deliver.
In addition, prerasterization ensures that the display is consistent, especially name positions, when moving the map on the screen, for instance using an application programming interface (API) such as OpenLayers.
5. Customization tools
Thanks to the architecture presented before, we have a multiscale vector DCM whose rendering offers some parameterization facilities. We describe in this section services developed on this architecture for an assisted customization of the content and data rendering.
5.1. Theme choice
In order to offer the possibility to end users to select which data to represent on their maps, we grouped layers of geographic objects into several themes that can be displayed separately or combined together. This produces flexibility in terms of data content but avoids introducing too much complexity--as if content customization was offered for each single feature class. A dozen themes have been identified (Figure 14):
(1) Cartographic background;
(2) Land use (vegetation coverage);
(3) Populated places (urban areas);
(4) Constructions (buildings, facilities);
(5) Orography (contour lines, height control points);
(6) Hydrography (rivers, canals, lakes, and other open water areas);
(7) Administrative boundaries, restricted areas, and zoning;
(8) Road network;
(9) Tourist information, POI;
(10) Railroad and energy networks;
(11) Layout (grids).
Special care was taken to make themes consistent at all scales on road and topographic maps.
Note that in our architecture we offer the opportunity to overlay layers from a DCM; we thus ensure better readability than overlaying layers from a DLM.
5.2. Colorado
Colorado is a software prototype that helps users make color choices that suit their needs and produce a readable and understandable map. The Colorado engine is a refactoring of the COLLEG software prototyped by Christophe (2011). Its interfaces are Web-based to make Colorado available as a service for any IGN user.
In order to guide users through the design of their personalized and original map legend, Colorado first asks some questions about the desired map: its scale, the kind of landscape, and the features to personalize. The landscapes are combinations of urban and rural landscapes, mountains, plains, or coastal areas. The features proposed for color personalization are the themes mentioned above.
Colorado then gives two different sources of inspiration for users to pick their favorite colors from:
(1) Color palettes. The user makes color choices from noncartographic sources. The requirement is that every color is picked from a single predefined harmonious palette;
(2) Map samples. The user chooses colors in a cartographic context and can see how the color will look when applied to geographic features (Figures 15 and 16).
Once color choices are submitted, the Colorado engine computes every possible map legend from them, rates those legends, and makes a preselection of several legends based on the following:
(1) Color contrast and brightness;
(2) Predominant color pattern. For instance, the color of the sea matters in a seascape, the colors of buildings matter in an urban landscape, and so on;
(3) Cartographic conventions. This is an optional criterion. Three conventions are applied if the user wants to use them: sea color should be a shade of blue, vegetation colors should be shades of green, background color should be pale.
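The kind of rating described above might look as follows: a WCAG-style contrast between the background and each feature color, plus one of the cartographic conventions (water should be a shade of blue). The actual Colorado scoring is not detailed in the text; the formulas and thresholds here are illustrative.

```python
# Hedged sketch of a legend-rating function: mean color contrast against
# the background, zeroed when the water color breaks the blue convention.

def luminance(hex_color):
    """Relative luminance of an sRGB color given as 'RRGGBB' (WCAG formula)."""
    def lin(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast(c1, c2):
    """WCAG contrast ratio, between 1 and 21."""
    l1, l2 = sorted((luminance(c1), luminance(c2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def rate_legend(background, features):
    """Mean contrast against the background; 0 if water is not bluish."""
    r, g, b = (int(features["water"][i:i + 2], 16) for i in (0, 2, 4))
    if b <= max(r, g):                 # convention: water is a blue shade
        return 0.0
    scores = [contrast(background, c) for c in features.values()]
    return sum(scores) / len(scores)

legend = {"water": "1A6FB0", "vegetation": "9BC995", "roads": "333333"}
print(round(contrast("FFFFFF", "000000"), 1))   # 21.0
print(rate_legend("FFFFFF", legend) > rate_legend(
    "FFFFFF", {"water": "B07F1A", "vegetation": "9BC995", "roads": "333333"}))
```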
A new interface then enables the user to browse the preselected map legends and make a final choice. It is possible to pan and zoom on a map with a selected legend to appreciate its rendering on a wider area than the initial landscape choice.
The result is an XML configuration file listing the legend items and their associated colors, together with the URL of a WMS.
5.3. Colors and map samples database
One of the strategies used by Colorado to help the user choose a map legend is based on map samples.
We built a database and Web interfaces to store map samples and their associated legends to test how Colorado can be used with different sources of inspiration.
The interfaces are made to facilitate the addition of new color palettes and new map samples based on those palettes. Palettes are extracted from publications on harmonious color use (Sawahata 2001) and from pictures from some famous painters.
This is a practical application of variable substitutions allowed by GeoServer that were previously implemented in SLD files (see above).
Color palettes and more than 500 map samples stored in this database are used as sources of inspiration in Colorado (Figure 17).
6. Conclusion and outlook
The multiscale and customizable DCM presented in this article has been made possible by years of research on map generalization and map customization, as well as by development work to tune, improve, and strengthen tools for large, real data sets and real needs.
Since February 2013, the cartographic pyramid has been available to the public as layers of the French Geoportail® WMS and Web Map Tile Service. Users of the API can now use the standard and classical legends at scales ranging from 1:4,000 to 1:10,000,000.
The tiled 1:25,000-scale map has been distributed since mid-2012 as a downloadable product complementary to SCAN 25®. This product is meant for use with GIS software as a background map and offers sufficiently high resolution for printing at a quality that cannot be achieved with the available WMS.
A new edition of the cartographic pyramid is foreseen every 6 months. Each time, the whole process will be rerun for 1:25,000 and larger scales. As the process is mostly automated, this seems to be the easiest way to do it. For smaller scales, updates will be made depending on how frequently the input cartographic databases are updated. Currently, the update period is 1 or 2 years for different scales.
Work is in progress to improve customization tools and develop new Web services. A service on IGN's website already enables customers to define a personalized paper map with the following options (http://loisirs.ign.fr/carte-a-la-carte.html): personalized title, cover illustration and color, three different kinds of maps, a dozen different scales, customized extent, three sizes, two paper orientations, and normal or tear-resistant paper. This service will be redesigned with new options that take advantage of progress made on personalization tools and production flow line flexibility. The production flow line of the printed map will need to be adapted.
Duchene et al. (2014) identify four main future challenges and research directions around the derivation of multiscale data in national mapping agencies. The first is increasing the effectiveness of data derivation. In our process, label placement remains the most costly task because it involves interactive editing. This issue should be tackled, either by improving the quality of automatic placement or by better managing updates. The second challenge concerns the integration of heterogeneous data. Undoubtedly, this will be a challenge for our process in the future, when more and more heterogeneous data from partners are used as input to the cartographic process. The third challenge is the on-demand derivation of cartographic products. With the Colorado Web service prototype, our work paves the way for the customization of maps by users. The last challenge is to include user data in our cartographic displays. This is certainly the next step of map customization. For now, we offer users the possibility to overlay their own data on our maps. A key issue is to determine whether simple data overlay is sufficient or whether a cleverer integration of user data would be useful. An example of such integration is the boundary offsetting performed for our maps and described above.
Barrault, M. 1998. "Le Placement Cartographique Des Ecritures: Resolution D'un Probleme A Forte Combinatoire Et Presentant Un Grand Nombre De Contraintes Variees." PhD thesis, Université de Marne-la-Vallée.
Barrault, M., N. Regnauld, C. Duchene, K. Haire, C. Baeijs, Y. Demazeau, P. Hardy, W. Mackaness, A. Ruas, and R. Weibel. 2001. "Integrating Multi-agent, Object-oriented, and Algorithmic Techniques for Improved Automated Map Generalization." Proceedings of the 20th International Cartographic Conference, vol. 3, edited by International Cartographic Association, Beijing, August 6-10, 2110-2116.
Brassel, K. E., and R. Weibel. 1988. "A Review and Conceptual Framework of Automated Map Generalization." International Journal of Geographical Information Systems 2 (3): 229-244. doi: 10.1080/02693798808927898.
Braun, A., X. Halbecq, F. Lecordix, J.-M. Le Gallic, and F. Prigent. 2007. "A New Flowline for the French Topographic Maps in IGN." Proceedings of the 23rd International Cartographic Conference (ICC), Moscow, August 2007.
Cartwright, W. 2007. "Addressing the Value of Art in Cartographic Communication." Proceedings of ISPRS, ICA and DGFK Joint Workshop on Visualization and Exploration of Geospatial Data, edited by International Society for Photogrammetry and Remote Sensing, Stuttgart, June 27-29.
Christophe, S. 2011. "Creative Colours Specification Based on Knowledge (Colorlegend System)." The Cartographic Journal 48 (2): 138-145. doi: 10.1179/1743277411Y.0000000012.
Davis, M. 2013. "JTS Topology Suite." Accessed April 8, 2013. http://tsusiatsoftware.net/jts/main.html
Duchene, C. 2010. "Current State of Research in Generalisation at COGIT Laboratory, IGN France." Generalisation and Data Integration Symposium, Boulder, CO, June 20-22.
Duchene, C., B. Baella, C. Brewer, D. Burghardt, B. Buttenfield, J. Gaffuri, D. Kauferle, et al. 2014. "Generalisation in Practice within Governmental Mapping Agencies." In Abstracting Geographic Information in a Data Rich World: Methodologies and Applications of Map Generalization, edited by W. Mackaness, D. Burghardt, and C. Duchene, 329-391. Berlin: Springer.
Eurogeographics. 2005. Generalisation Processes: A Benchmark Study of the Expert Group on Quality, Eurogeographics Internal Report 2005.
Foerster, T., J. Stoter, and M. Kraak. 2010. "Challenges for Automated Generalisation at European Mapping Agencies: A Qualitative and Quantitative Analysis." The Cartographic Journal 47 (1): 41-54. doi: 10.1179/000870409X12525737905123.
GeoServer. 2012. "GeoServer Variable Substitution, GeoServer 2.3.X User Manual." Accessed April 8, 2013. http://docs.geoserver.org/stable/en/user/styling/sld-extensions/substitution.html
Grünreich, D. 1985. Computer Assisted Generalisation: Papers CERCO-Cartography Course. Frankfurt am Main: Institut für Angewandte Geodäsie.
Harrie, L., S. Mustiere, and H. Stigmar. 2011. "Cartographic Quality Issues for View Services in Geoportals." Cartographica 46 (2), special issue on Internet Mapping: Selected Papers from the 25th Conference of the International Cartographic Association, Paris, July 3-8: 92-100.
Harrie, L., and H. Stigmar. 2009. "An Evaluation of Measures for Quantifying Map Information." ISPRS Journal of Photogrammetry and Remote Sensing. doi: 10.1016/j.isprsjprs.2009.05.004.
INSPIRE. 2007. Directive 2007/2/EC of the European Parliament and of the Council of 14 March 2007 establishing an Infrastructure for Spatial Information in the European Community (INSPIRE).
Jahard, Y., C. Lemarie, and F. Lecordix. 2003. "The Implementation of a New Technology to Automate Map Generalisation and Incremental Updating Processes." Proceedings of the 21st International Cartographic Conference (ICC), edited by International Cartographic Association, Durban, August 10-16, 1449-1459.
Maugeais, E., F. Lecordix, X. Halbecq, and A. Braun. 2011. "Derivation Cartographique Multi Echelles De La BD Topo De L'IGN France: Mise En Œuvre Du Processus De Production De La Nouvelle Carte De Base." Proceedings of the 25th International Cartographic Conference (ICC), Paris, July 2011.
Mustiere, S., and T. Devogele. 2008. "Matching Networks with Different Levels of Detail." GeoInformatica 12 (4): 435-453. doi: 10.1007/s10707-007-0040-1.
Open Geospatial Consortium Inc. 2007. "Styled Layer Descriptor Profile of the Web Map Service Implementation Specification." Accessed April 9, 2013. http://www.opengeospatial.org/standards/sld
Open Geospatial Consortium Inc. 2006. "OpenGIS[R] Web Map Server Implementation Specification." Accessed April 9, 2013. http://www.opengeospatial.org/standards/wms
Sawahata, L. 2001. Color Harmony Workbook. Rockport, MA: Rockport Publishers.
Stoter, J. 2005. "Generalisation within NMA in the 21st Century." Proceedings of the 22nd International Cartographic Conference, A Coruna, July 2005.
S. Lafay (a), A. Braun (a) *, D. Chandler (a), M. Michaud (a), L. Ricaud (a) and S. Mustiere (b)
(a) Departement Geomatique et Cartographie, IGN, D2SI/SIDT, 73 avenue de Paris, 94160 Saint-Mande, France; (b) IGN, Universite Paris Est, Laboratoire COGIT, 73 avenue de Paris, 94160 Saint-Mande, France
* Corresponding author. Email: firstname.lastname@example.org
(Received 13 September 2013; accepted 27 August 2014)
(1.) SCAN 25® is a rasterization of the topographic map series traditionally produced at IGN and is the map originally displayed in the French geoportal.
Table 1. Statistics on process duration, September 2012.
Number of processed areas: 1567
Minimum duration: 6 min
Areas with process duration < 30 min: 92
Maximum duration: 20 h
Areas with process duration > 6 h: 54
Average duration: 2 h
Total process time (extrapolated to 1567 areas): 124 days
Publication: Cartography and Geographic Information Science
Date: Jan 1, 2015