ArcGIS Enterprise supports a very simple publishing workflow for scene layers: upload your scene layer package (.slpk) as an item and click Publish; ArcGIS Enterprise takes care of the rest. However, if you have very large slpks, with gigabytes or more of content, publishing can become time-consuming and disk intensive.
ArcGIS Enterprise 10.8.1 provides an alternative publishing workflow, specifically designed for those of you who need to publish ginormous amounts of data.
Scene layers that reference content in folder or cloud data stores
ArcGIS Enterprise 10.8.1 adds the ability to directly reference i3s content in folders or cloud data stores. You can now just provide the path to scene content, located in a folder or cloud data store registered with ArcGIS Enterprise, and publish a scene layer using the ArcGIS API for Python. ArcGIS Enterprise will serve tile content directly from the registered data store for scene layers.
Here is what you need to get started:
- Scene content in a ready-to-serve format.
- The following privileges assigned to your role in ArcGIS Enterprise:
  - Register data stores
  - Create, update and delete content
  - Publish server-based layers
Scene layer packages
A scene layer package (.slpk) encapsulates the geometry, textures, attributes, and metadata for a scene layer in a single file. You can create scene layer packages in ArcGIS Pro, ArcGIS Drone2Map, or ArcGIS CityEngine, or acquire them from third parties. Scene layer packages are ideal for archiving or for distributing scene content as a single file.
However, direct access to scene layer packages by many concurrent users can introduce performance problems. To improve scalability, we are introducing two new patterns that let you extract data from packages into the format best suited to the type of data store you use. These new patterns enable ArcGIS Enterprise to serve scene layers directly from the data store.
Creating ready-to-serve scene layer content
Download i3s_converter.exe from GitHub to extract the i3s content from an .slpk and create ready-to-serve scene layer content using version 1.7 of the i3s specification.
i3s_converter.exe is a nifty command-line tool that creates ready-to-serve scene layer content in the storage format best suited to the destination data store.
User managed data stores
Let us take a moment to determine where the extracted scene layer content should reside; this location will be used to serve scene layer tiles.
You may choose a file system location or a cloud object store such as Amazon S3, Azure Blob storage, or Alibaba OSS. The location you choose must be registered as a user-managed data store with ArcGIS Enterprise.
Using folder data stores
When possible, we recommend using file system directories located on SSD devices for high-speed content access. Such folder data stores can support the large number of requests that popular scene layers receive.
Use the command shown below to extract content from an .slpk into the ready-to-serve format for folder data stores (the eslpk storage format).
i3s_converter.exe --extract C:\path_to_folder\sample.slpk -d \\unc\path_to_my_data_store
Using cloud data stores
Cloud data stores are recommended when you need data protection, backup, and fail-over support, in addition to virtually unlimited storage space.
Use the command shown below to upgrade and extract content from an .slpk at version 1.6 or earlier into the ready-to-serve storage format for cloud data stores (the i3sREST storage format).
i3s_converter.exe --convert C:\path_to_folder\sample2.slpk -d s3://mySlpkBucket -a MY_ACCESS_KEY -s MY_SECRET_KEY -r us-east-1a
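The two invocations above differ only in the operation flag and the destination arguments. As a rough illustration (the helper name and structure are mine; only the flags shown above are taken from the tool), a small Python wrapper could assemble the command line before handing it to a process runner:

```python
# Hypothetical helper that assembles an i3s_converter.exe command line.
# The flags (--extract / --convert, -d, -a, -s, -r) are the ones shown
# in the examples above; the function itself is illustrative only.
def converter_args(slpk, dest, access_key=None, secret_key=None,
                   region=None, upgrade=False):
    """Build the argument list: --extract for current slpks,
    --convert to upgrade a v1.6-or-earlier slpk while extracting."""
    args = ["i3s_converter.exe",
            "--convert" if upgrade else "--extract",
            slpk, "-d", dest]
    if access_key:                      # cloud destinations only
        args += ["-a", access_key]
    if secret_key:
        args += ["-s", secret_key]
    if region:
        args += ["-r", region]
    return args
```

On a machine where the converter is installed, the resulting list can be passed to `subprocess.run()`.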
For more usage information on i3s_converter, see the tool's documentation on GitHub.
What’s more!
You can even write your own scene cache in the desired storage format using the version 1.7 i3s specification, available on GitHub.
Using ArcGIS API for Python to publish a scene layer that references scene layer content
Here are code snippets that you can easily modify to publish your scene layers using ArcGIS API for Python 1.8.2 and later.
If you are new to the ArcGIS API for Python, I highly recommend the blog posts listed in the “Related content” section at the bottom of this post. These beginner guides set the stage to make you a Python wizard.
To publish your scene layer, follow these four steps:
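A rough sketch of the heart of this workflow (variable names are placeholders; the `describe()` call matches the one discussed in the comments below, whose successful response has the shape `{"result": {"children": [...]}}`):

```python
# Sketch: after registering the data store (steps 1-2), describe its
# contents and locate the extracted scene content (e.g. an .eslpk folder).
def find_scene_content(desc, name):
    """Return the first child entry whose name contains `name`, or None.

    `desc` is the dict returned by describe(...).result(); on success it
    looks like {"result": {"children": [{"name": ...}, ...]}}.
    """
    for child in desc.get("result", {}).get("children", []):
        if name.lower() in child["name"].lower():
            return child
    return None

# The live call would look like (names are placeholders):
# desc = ds.describe(item=ds_item.id, server_id=server_id,
#                    path='/', store_type='datastore').result()
# child = find_scene_content(desc, 'sample.eslpk')
```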
Updates
You can use the Replace Layer option on the scene layer item to update these layers when new content is available.
Stay tuned; we are working on a user experience for this workflow and on extending it to other large datasets!
Hello, I use the notebook in ArcGIS Pro 2.6 and updated the Python API to 1.8.2. The first two steps are fine, and I created a Data Store (Folder) in my Portal and Server. But the third step always comes back with ERROR 999999, with the full error as below: Exception Traceback (most recent call last) In [3]: Line 2: desc = ds.describe(item=ds_item.id, server_id=server_id, path='/', store_type='datastore').result() File C:\arcpy\arcgispro-py3-clone26\lib\site-packages\arcgis\gis\_impl\_jb.py, in result: Line 202: res = self._future.result() File C:\arcpy\arcgispro-py3-clone26\lib\concurrent\futures\_base.py, in result: Line 432: return self.__get_result() File C:\arcpy\arcgispro-py3-clone26\lib\concurrent\futures\_base.py, in __get_result: Line 384: raise self._exception File C:\arcpy\arcgispro-py3-clone26\lib\concurrent\futures\thread.py, in run: Line 56: result = self.fn(*self.args, **self.kwargs) File… Read more »
Hi Karl,
Please ensure the data store path is the folder where the eslpk/i3sREST content resides; do not provide the path to the .eslpk itself as the data store. Please feel free to reach out to me via the “connect” option.
Best wishes,
Garima
Hello, in Step 3, when
ds_name = 'qingxie.eslpk'
for child in desc['result']['children']:
    if child['name'].lower().find(ds_name.lower()) > -1:
        break
it returns an error:
KeyError Traceback (most recent call last)
in
      1 ds_name = 'qingxie.eslpk'
----> 2 for child in desc['result']['children']:
      3     if child['name'].lower().find(ds_name.lower()) > -1:
      4         break
KeyError: 'result'
my desc['definition'] is
{'operation': 'describe',
'datastoreId': 'f288a585b3224e6091c8707e31c5b764',
'serverId': 'EOI1WUNp6w8OfMu7',
'path': '/',
'type': 'datastore'}
can you help me?
Hello Kang,
You may choose to skip step 3 if you already know the path to your scene content folder inside the data store.
Best Wishes,
Garima
Starting with this release, scene service i3s content can be stored in a tile cache store (in primary/standby mode or a cluster of multiple machines), in a folder store, or in a cloud store. Which should I choose when I publish: the tile cache store, the folder store, or the cloud store? What are the advantages and disadvantages of each? Also, for the tile cache store, it says here: https://enterprise.arcgis.com/en/portal/latest/administer/windows/whats-new-data-store.htm However, accessing scene data in this mode can be slow when you use fewer than five machines… Read more »
Sai, the ArcGIS Data Store based scene tile cache store is used for: 1. hosted scene layers with associated feature services, and 2. hosted scene layers published from a scene layer package. It supports the following options: • Single machine, or primary/standby mode with 2 machines for maximum throughput: recommended if your scene cache content can be accommodated on a single machine • Cluster mode (an odd number of machines): recommended if you anticipate your scene cache content will grow; the additional machines in the cluster help re-balance your scene caches to scale out across multiple machines. Please see the following link for detailed… Read more »
Great new option!
After i3s_converter explodes the .slpk into the eslpk shared folder, publishing takes just a few seconds!
And it gives even more control over what is being published than the default data store publishing process.
I successfully published using an administrator account, but I found an issue using a Publisher account: the Python script ends in an exception, “Error Code: 400”.
The lucky part is that the publishing process seems to complete correctly, and the service and item are up and running!
Is this a Python API issue only? Has anyone else experienced this behavior?
Hello Alessandro,
This seems to be a bug in the ArcGIS API for Python. We are working to resolve it in the next release. In the meantime, the only workaround is to use administrative credentials to describe the contents of the data store, or to use the REST endpoint of the sharing API described at https://developers.arcgis.com/rest/users-groups-and-items/describe-datastore.htm
We are working to provide a user interface for this workflow in the upcoming release of ArcGIS Enterprise.
Best Wishes,
Garima