
incus list takes > 18 seconds likely proportional to number of non-expired snapshots #1837


Description

@chrisjsimpson

Is there an existing issue for this?

  • There is no existing issue for this bug

Is this happening on an up to date version of Incus?

  • This is happening on a supported version of Incus

Incus system details

`incus info`:


config:
  core.https_address: :8443
api_extensions:
- storage_zfs_remove_snapshots
- container_host_shutdown_timeout
- container_stop_priority
- container_syscall_filtering
- auth_pki
- container_last_used_at
- etag
- patch
- usb_devices
- https_allowed_credentials
- image_compression_algorithm
- directory_manipulation
- container_cpu_time
- storage_zfs_use_refquota
- storage_lvm_mount_options
- network
- profile_usedby
- container_push
- container_exec_recording
- certificate_update
- container_exec_signal_handling
- gpu_devices
- container_image_properties
- migration_progress
- id_map
- network_firewall_filtering
- network_routes
- storage
- file_delete
- file_append
- network_dhcp_expiry
- storage_lvm_vg_rename
- storage_lvm_thinpool_rename
- network_vlan
- image_create_aliases
- container_stateless_copy
- container_only_migration
- storage_zfs_clone_copy
- unix_device_rename
- storage_lvm_use_thinpool
- storage_rsync_bwlimit
- network_vxlan_interface
- storage_btrfs_mount_options
- entity_description
- image_force_refresh
- storage_lvm_lv_resizing
- id_map_base
- file_symlinks
- container_push_target
- network_vlan_physical
- storage_images_delete
- container_edit_metadata
- container_snapshot_stateful_migration
- storage_driver_ceph
- storage_ceph_user_name
- resource_limits
- storage_volatile_initial_source
- storage_ceph_force_osd_reuse
- storage_block_filesystem_btrfs
- resources
- kernel_limits
- storage_api_volume_rename
- network_sriov
- console
- restrict_dev_incus
- migration_pre_copy
- infiniband
- dev_incus_events
- proxy
- network_dhcp_gateway
- file_get_symlink
- network_leases
- unix_device_hotplug
- storage_api_local_volume_handling
- operation_description
- clustering
- event_lifecycle
- storage_api_remote_volume_handling
- nvidia_runtime
- container_mount_propagation
- container_backup
- dev_incus_images
- container_local_cross_pool_handling
- proxy_unix
- proxy_udp
- clustering_join
- proxy_tcp_udp_multi_port_handling
- network_state
- proxy_unix_dac_properties
- container_protection_delete
- unix_priv_drop
- pprof_http
- proxy_haproxy_protocol
- network_hwaddr
- proxy_nat
- network_nat_order
- container_full
- backup_compression
- nvidia_runtime_config
- storage_api_volume_snapshots
- storage_unmapped
- projects
- network_vxlan_ttl
- container_incremental_copy
- usb_optional_vendorid
- snapshot_scheduling
- snapshot_schedule_aliases
- container_copy_project
- clustering_server_address
- clustering_image_replication
- container_protection_shift
- snapshot_expiry
- container_backup_override_pool
- snapshot_expiry_creation
- network_leases_location
- resources_cpu_socket
- resources_gpu
- resources_numa
- kernel_features
- id_map_current
- event_location
- storage_api_remote_volume_snapshots
- network_nat_address
- container_nic_routes
- cluster_internal_copy
- seccomp_notify
- lxc_features
- container_nic_ipvlan
- network_vlan_sriov
- storage_cephfs
- container_nic_ipfilter
- resources_v2
- container_exec_user_group_cwd
- container_syscall_intercept
- container_disk_shift
- storage_shifted
- resources_infiniband
- daemon_storage
- instances
- image_types
- resources_disk_sata
- clustering_roles
- images_expiry
- resources_network_firmware
- backup_compression_algorithm
- ceph_data_pool_name
- container_syscall_intercept_mount
- compression_squashfs
- container_raw_mount
- container_nic_routed
- container_syscall_intercept_mount_fuse
- container_disk_ceph
- virtual-machines
- image_profiles
- clustering_architecture
- resources_disk_id
- storage_lvm_stripes
- vm_boot_priority
- unix_hotplug_devices
- api_filtering
- instance_nic_network
- clustering_sizing
- firewall_driver
- projects_limits
- container_syscall_intercept_hugetlbfs
- limits_hugepages
- container_nic_routed_gateway
- projects_restrictions
- custom_volume_snapshot_expiry
- volume_snapshot_scheduling
- trust_ca_certificates
- snapshot_disk_usage
- clustering_edit_roles
- container_nic_routed_host_address
- container_nic_ipvlan_gateway
- resources_usb_pci
- resources_cpu_threads_numa
- resources_cpu_core_die
- api_os
- container_nic_routed_host_table
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
- resources_system
- images_push_relay
- network_dns_search
- container_nic_routed_limits
- instance_nic_bridged_vlan
- network_state_bond_bridge
- usedby_consistency
- custom_block_volumes
- clustering_failure_domains
- resources_gpu_mdev
- console_vga_type
- projects_limits_disk
- network_type_macvlan
- network_type_sriov
- container_syscall_intercept_bpf_devices
- network_type_ovn
- projects_networks
- projects_networks_restricted_uplinks
- custom_volume_backup
- backup_override_name
- storage_rsync_compression
- network_type_physical
- network_ovn_external_subnets
- network_ovn_nat
- network_ovn_external_routes_remove
- tpm_device_type
- storage_zfs_clone_copy_rebase
- gpu_mdev
- resources_pci_iommu
- resources_network_usb
- resources_disk_address
- network_physical_ovn_ingress_mode
- network_ovn_dhcp
- network_physical_routes_anycast
- projects_limits_instances
- network_state_vlan
- instance_nic_bridged_port_isolation
- instance_bulk_state_change
- network_gvrp
- instance_pool_move
- gpu_sriov
- pci_device_type
- storage_volume_state
- network_acl
- migration_stateful
- disk_state_quota
- storage_ceph_features
- projects_compression
- projects_images_remote_cache_expiry
- certificate_project
- network_ovn_acl
- projects_images_auto_update
- projects_restricted_cluster_target
- images_default_architecture
- network_ovn_acl_defaults
- gpu_mig
- project_usage
- network_bridge_acl
- warnings
- projects_restricted_backups_and_snapshots
- clustering_join_token
- clustering_description
- server_trusted_proxy
- clustering_update_cert
- storage_api_project
- server_instance_driver_operational
- server_supported_storage_drivers
- event_lifecycle_requestor_address
- resources_gpu_usb
- clustering_evacuation
- network_ovn_nat_address
- network_bgp
- network_forward
- custom_volume_refresh
- network_counters_errors_dropped
- metrics
- image_source_project
- clustering_config
- network_peer
- linux_sysctl
- network_dns
- ovn_nic_acceleration
- certificate_self_renewal
- instance_project_move
- storage_volume_project_move
- cloud_init
- network_dns_nat
- database_leader
- instance_all_projects
- clustering_groups
- ceph_rbd_du
- instance_get_full
- qemu_metrics
- gpu_mig_uuid
- event_project
- clustering_evacuation_live
- instance_allow_inconsistent_copy
- network_state_ovn
- storage_volume_api_filtering
- image_restrictions
- storage_zfs_export
- network_dns_records
- storage_zfs_reserve_space
- network_acl_log
- storage_zfs_blocksize
- metrics_cpu_seconds
- instance_snapshot_never
- certificate_token
- instance_nic_routed_neighbor_probe
- event_hub
- agent_nic_config
- projects_restricted_intercept
- metrics_authentication
- images_target_project
- images_all_projects
- cluster_migration_inconsistent_copy
- cluster_ovn_chassis
- container_syscall_intercept_sched_setscheduler
- storage_lvm_thinpool_metadata_size
- storage_volume_state_total
- instance_file_head
- instances_nic_host_name
- image_copy_profile
- container_syscall_intercept_sysinfo
- clustering_evacuation_mode
- resources_pci_vpd
- qemu_raw_conf
- storage_cephfs_fscache
- network_load_balancer
- vsock_api
- instance_ready_state
- network_bgp_holdtime
- storage_volumes_all_projects
- metrics_memory_oom_total
- storage_buckets
- storage_buckets_create_credentials
- metrics_cpu_effective_total
- projects_networks_restricted_access
- storage_buckets_local
- loki
- acme
- internal_metrics
- cluster_join_token_expiry
- remote_token_expiry
- init_preseed
- storage_volumes_created_at
- cpu_hotplug
- projects_networks_zones
- network_txqueuelen
- cluster_member_state
- instances_placement_scriptlet
- storage_pool_source_wipe
- zfs_block_mode
- instance_generation_id
- disk_io_cache
- amd_sev
- storage_pool_loop_resize
- migration_vm_live
- ovn_nic_nesting
- oidc
- network_ovn_l3only
- ovn_nic_acceleration_vdpa
- cluster_healing
- instances_state_total
- auth_user
- security_csm
- instances_rebuild
- numa_cpu_placement
- custom_volume_iso
- network_allocations
- zfs_delegate
- storage_api_remote_volume_snapshot_copy
- operations_get_query_all_projects
- metadata_configuration
- syslog_socket
- event_lifecycle_name_and_project
- instances_nic_limits_priority
- disk_initial_volume_configuration
- operation_wait
- image_restriction_privileged
- cluster_internal_custom_volume_copy
- disk_io_bus
- storage_cephfs_create_missing
- instance_move_config
- ovn_ssl_config
- certificate_description
- disk_io_bus_virtio_blk
- loki_config_instance
- instance_create_start
- clustering_evacuation_stop_options
- boot_host_shutdown_action
- agent_config_drive
- network_state_ovn_lr
- image_template_permissions
- storage_bucket_backup
- storage_lvm_cluster
- shared_custom_block_volumes
- auth_tls_jwt
- oidc_claim
- device_usb_serial
- numa_cpu_balanced
- image_restriction_nesting
- network_integrations
- instance_memory_swap_bytes
- network_bridge_external_create
- network_zones_all_projects
- storage_zfs_vdev
- container_migration_stateful
- profiles_all_projects
- instances_scriptlet_get_instances
- instances_scriptlet_get_cluster_members
- instances_scriptlet_get_project
- network_acl_stateless
- instance_state_started_at
- networks_all_projects
- network_acls_all_projects
- storage_buckets_all_projects
- resources_load
- instance_access
- project_access
- projects_force_delete
- resources_cpu_flags
- disk_io_bus_cache_filesystem
- instances_lxcfs_per_instance
- disk_volume_subpath
- projects_limits_disk_pool
- network_ovn_isolated
- qemu_raw_qmp
- network_load_balancer_health_check
- oidc_scopes
- network_integrations_peer_name
- qemu_scriptlet
- instance_auto_restart
- storage_lvm_metadatasize
- ovn_nic_promiscuous
- ovn_nic_ip_address_none
- instances_state_os_info
- network_load_balancer_state
- instance_nic_macvlan_mode
- storage_lvm_cluster_create
- network_ovn_external_interfaces
- instances_scriptlet_get_instances_count
- cluster_rebalance
- custom_volume_refresh_exclude_older_snapshots
- storage_initial_owner
- storage_live_migration
- instance_console_screenshot
- image_import_alias
- authorization_scriptlet
- console_force
api_status: stable
api_version: "1.0"
auth: trusted
public: false
auth_methods:
- tls
auth_user_name: chris
auth_user_method: unix
environment:
  addresses:
  architectures:
  - x86_64
  - i686
  certificate: 
  certificate_fingerprint: 50f630afbe6d0d92a24b6d0a609088c942e69fbd87adb4d1564961a3bb445bd7
  driver: qemu | lxc
  driver_version: 9.0.4 | 6.0.3
  firewall: nftables
  kernel: Linux
  kernel_architecture: x86_64
  kernel_features:
    idmapped_mounts: "true"
    netnsid_getifaddrs: "true"
    seccomp_listener: "true"
    seccomp_listener_continue: "true"
    uevent_injection: "true"
    unpriv_binfmt: "false"
    unpriv_fscaps: "true"
  kernel_version: 6.1.0-31-amd64
  lxc_features:
    cgroup2: "true"
    core_scheduling: "true"
    devpts_fd: "true"
    idmapped_mounts_v2: "true"
    mount_injection_file: "true"
    network_gateway_device_route: "true"
    network_ipvlan: "true"
    network_l2proxy: "true"
    network_phys_macvlan_mtu: "true"
    network_veth_router: "true"
    pidfd: "true"
    seccomp_allow_deny_syntax: "true"
    seccomp_notify: "true"
    seccomp_proxy_send_notify_fd: "true"
  os_name: Debian GNU/Linux
  os_version: "12"
  project: default
  server: incus
  server_clustered: false
  server_event_mode: full-mesh
  server_name: karma-a
  server_pid: 3452832
  server_version: 6.0.3
  storage: zfs
  storage_version: 2.2.7-1~bpo12+1
  storage_supported_drivers:
  - name: zfs
    version: 2.2.7-1~bpo12+1
    remote: false
  - name: dir
    version: "1"
    remote: false

Instance details

No response

Instance log

No response

Current behavior

@chrisjsimpson can you please file this (zabbly/incus#76) at https://github.com/lxc/incus so it's on the correct repository?

When filing it on the Incus project, it would be useful to show:
- `time incus query /1.0/instances`
- `time incus query /1.0/instances?recursion=1`
- `time incus query /1.0/instances?recursion=2`
- `incus monitor --pretty` output while running an `incus list`

(tl;dr: the `incus list` CLI and the `/1.0/instances` web API call behind it become extremely slow (> 18 seconds) when the snapshot cadence is very frequent. Snapshot expiry/cadence can of course be tuned to age snapshots out; I'm flagging this anyway because I'm keen to identify where/if an index may be added to improve performance for those who need/want regular snapshots with a reasonably long expiry. These are ZFS-backed snapshots.)

Based on these time outputs, if recursion level 1 contains all the data `incus list` needs, using recursion level 1 or lower would 'solve' this issue (recursion level 2, i.e. `instances?recursion=2`, is the slow one):
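A quick way to check what each recursion level actually returns is to compare the top-level keys of the first instance object (a sketch; assumes `jq` is installed):

```sh
# recursion=1 returns plain instance objects; recursion=2 returns "full"
# objects which additionally embed state and the complete snapshot list
incus query "/1.0/instances?recursion=1" | jq '.[0] | keys'
incus query "/1.0/instances?recursion=2" | jq '.[0] | keys'
```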

time incus query /1.0/instances

time incus query /1.0/instances
[
  "/1.0/instances/redacted-instance-1",
  "/1.0/instances/redacted-instance-2",
  "/1.0/instances/redacted-instance-3",
  "/1.0/instances/redacted-instance-4",
  "/1.0/instances/redacted-instance-5",
  "/1.0/instances/redacted-instance-6",
  "/1.0/instances/redacted-instance-7",
  "/1.0/instances/redacted-instance-8",
  "/1.0/instances/redacted-instance-9",
  "/1.0/instances/redacted-instance-10",
  "/1.0/instances/redacted-instance-11",
  "/1.0/instances/redacted-instance-12",
  "/1.0/instances/redacted-instance-13",
  "/1.0/instances/redacted-instance-14",
  "/1.0/instances/redacted-instance-15"
]

real  0m0.027s
user  0m0.015s
sys 0m0.014s

time incus query /1.0/instances?recursion=1

time incus query /1.0/instances?recursion=1
[
  {
    "architecture": "x86_64",
    "config": {
      "image.architecture": "amd64",
      "image.description": "Ubuntu jammy amd64 (20250313_08:09)",
      "image.os": "Ubuntu",
      "image.release": "jammy",
      "image.serial": "20250313_08:09",
      "image.type": "squashfs",
      "image.variant": "default",
      "snapshots.expiry": "5w",
      "snapshots.schedule": "*/3 * * * *",
      "volatile.base_image": "32132a399908c172c23fd325ad35172f176833de7b686724f13b1c569a966447",
      "volatile.cloud-init.instance-id": "bca5d426-bf79-4c71-a6be-3ef7e424b58f",
      "volatile.eth0.host_name": "vethc1d06bb8",
      "volatile.eth0.hwaddr": "00:16:3e:22:31:99",
      "volatile.idmap.base": "0",
      "volatile.idmap.current": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
      "volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
      "volatile.last_state.idmap": "[]",
      "volatile.last_state.power": "RUNNING",
      "volatile.uuid": "7d1d2f78-7479-49cd-bb34-74d0a16d346e",
      "volatile.uuid.generation": "7d1d2f78-7479-49cd-bb34-74d0a16d346e"
    },
    "created_at": "2025-03-14T13:23:34.484747177Z",
    "description": "",
    "devices": {},
    "ephemeral": false,
    "expanded_config": {
      "image.architecture": "amd64",
      "image.description": "Ubuntu jammy amd64 (20250313_08:09)",
      "image.os": "Ubuntu",
      "image.release": "jammy",
      "image.serial": "20250313_08:09",
      "image.type": "squashfs",
      "image.variant": "default",
      "snapshots.expiry": "5w",
      "snapshots.schedule": "*/3 * * * *",
      "volatile.base_image": "32132a399908c172c23fd325ad35172f176833de7b686724f13b1c569a966447",
      "volatile.cloud-init.instance-id": "bca5d426-bf79-4c71-a6be-3ef7e424b58f",
      "volatile.eth0.host_name": "vethc1d06bb8",
      "volatile.eth0.hwaddr": "00:16:3e:22:31:99",
      "volatile.idmap.base": "0",
      "volatile.idmap.current": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
      "volatile.idmap.next": "[{\"Isuid\":true,\"Isgid\":false,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000},{\"Isuid\":false,\"Isgid\":true,\"Hostid\":1000000,\"Nsid\":0,\"Maprange\":1000000000}]",
      "volatile.last_state.idmap": "[]",
      "volatile.last_state.power": "RUNNING",
      "volatile.uuid": "7d1d2f78-7479-49cd-bb34-74d0a16d346e",
      "volatile.uuid.generation": "7d1d2f78-7479-49cd-bb34-74d0a16d346e"
    },
    "expanded_devices": {
      "eth0": {
        "name": "eth0",
        "network": "incusbr0",
        "type": "nic"
      },
      "root": {
        "path": "/",
        "pool": "default-pool",
        "type": "disk"
      }
    },
...

real  0m0.042s
user  0m0.017s
sys 0m0.017s

time incus query /1.0/instances?recursion=2 > out.log

time incus query /1.0/instances?recursion=2 > out.log

real	0m16.875s
user	0m13.812s
sys	0m2.059s

ls -lh out.log 
-rw-r--r-- 1 chris chris 167M Mar 24 14:27 out.log

Note the 167M log, which is not surprising given the number of snapshots. The impact, though, is that the time for `incus list` to display is proportional to the number of snapshots taken. The same is true for the web API call, of course.
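As a possible workaround (an untested assumption on my part: the CLI may only need the expensive recursion level when a requested column depends on snapshot data), restricting `incus list` to columns that don't involve snapshots might help:

```sh
# n = name, s = state, 4 = IPv4; none of these should need snapshot data
time incus list -c ns4
```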

`incus monitor --pretty` output while running an `incus list`

incus monitor --pretty
DEBUG  [2025-03-24T14:30:20Z] Event listener server handler started         id=2a79c33d-ad2d-49d4-b1a0-f87e6b74ba0c local=/var/lib/incus/unix.socket remote=@
DEBUG  [2025-03-24T14:30:27Z] Handling API request                          ip=@ method=GET protocol=unix url=/1.0 username=chris
DEBUG  [2025-03-24T14:30:27Z] Handling API request                          ip=@ method=GET protocol=unix url="/1.0/instances?filter=&recursion=2" username=chris
DEBUG  [2025-03-24T14:30:28Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:28Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:28Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:28Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:28Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:28Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:28Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:28Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:29Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:29Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:29Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:29Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:30Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:30Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:30Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:30Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:30Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:30Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:30Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:30Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:31Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:31Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:31Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:31Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:32Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:32Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:32Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:32Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:32Z] GetInstanceUsage started                      driver=zfs instance=REDACTED pool=default-pool project=default
DEBUG  [2025-03-24T14:30:32Z] GetInstanceUsage finished                     driver=zfs instance=REDACTED pool=default-pool project=default

Note there is a good ~5-10 second pause after the last DEBUG message (`GetInstanceUsage finished driver=zfs instance=REDACTED pool=default-pool project=default`) before the output of `incus list` is shown, so something else is happening between the last `GetInstanceUsage finished` log message and the terminal output of the instance list (guess: the moving/parsing of the large > 160MB object back from the API).
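One way to separate server time from the client-side decoding/rendering is to time the raw API response over the unix socket (a sketch; the socket path matches the one in the monitor output above, and assumes `curl` is available):

```sh
# Fetch the same recursion=2 payload without the CLI's JSON decoding
# and table rendering
time curl -s --unix-socket /var/lib/incus/unix.socket \
  "http://incus/1.0/instances?recursion=2" > /dev/null
```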

Detail

For testing I have experimented with a very frequent snapshot cadence across all instances of every three minutes (at most once every 1 min is supported, per the docs).

With 15 instances, there are now upwards of 48 thousand snapshots.
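The snapshot count can be confirmed directly from ZFS (a sketch; this counts every ZFS snapshot on the host, which here are nearly all Incus's):

```sh
zfs list -t snapshot -H -o name | wc -l
```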

On the incus web UI, the `/instances` API call (that path is from memory) exceeds 100MB, since the response, helpfully, includes all snapshot history; however, when you're on a slow internet connection, the instance list will not load until that API response is returned. After reading the API/contributing docs, I suspect there's a 'filter' or 'depth' parameter which may be added to the call, but I couldn't find the implementation / where it lives. Another assumption is that there perhaps isn't an index on the snapshots (cowsql?) when they are retrieved, which might explain why it's so slow (unconfirmed).

The impact on incus-ui-canonical is perhaps the most economically costly, given the bandwidth needed to download the large payload each time (does all the snapshot history have to be included in the response?). Whilst the CLI response to `incus list` taking a long time can be frustrating for the impatient, it's server side so has less impact. I tried to understand how to make that faster, though I'm more familiar with pure sqlite3 than dqlite/cowsql (and it's still an assumption that this is where the bottleneck lies). Asking for fewer snapshots to be returned (or less granular information) may be lower-hanging fruit to make the CLI response faster (I assume `incus list` is doing a COUNT(*) or similar on the number of snapshots each time).
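The database side can be poked at directly via the admin SQL interface (a sketch; `instances_snapshots` is my assumption for the relevant table name in the global schema):

```sh
# Count snapshot rows in the global database and see how long it takes
time incus admin sql global "SELECT COUNT(*) FROM instances_snapshots"
```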

Expected behavior

No response

Steps to reproduce

Configure Incus with a frequent snapshot cadence and a reasonably long snapshot expiry so that many snapshots accumulate:

incus profile show default
config:
  snapshots.expiry: 5w
  snapshots.schedule: '*/3 * * * *'
description: Default Incus profile
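Equivalently, the schedule and expiry can be set on the default profile directly:

```sh
incus profile set default snapshots.schedule "*/3 * * * *"
incus profile set default snapshots.expiry 5w
```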

Create a few instances with that profile, as sketched below.
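For example (a sketch; the image alias and instance names are illustrative):

```sh
for i in $(seq 1 15); do
  incus launch images:ubuntu/jammy "test-$i"
done
```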

Wait until many snapshots have been taken (fast-forward your clock / jump in a time machine :) ), or create them directly as shown below.
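A loop like this creates snapshots immediately rather than waiting on the schedule (a sketch; the instance and snapshot names are illustrative):

```sh
for i in $(seq 1 3000); do
  incus snapshot create test-1 "snap$i"
done
```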

Observe the increase in `incus list` time and in `time incus query /1.0/instances?recursion=2` at recursion level 2.

Metadata

Labels: Incomplete (waiting on more information from reporter)