Kconfig

# SPDX-License-Identifier: GPL-2.0-only
menu "Xen driver support"
    depends on XEN

config XEN_BALLOON
    bool "Xen memory balloon driver"
    default y
    help
      The balloon driver allows the Xen domain to request more memory from
      the system to expand the domain's memory allocation, or alternatively
      return unneeded memory to the system.

config XEN_BALLOON_MEMORY_HOTPLUG
    bool "Memory hotplug support for Xen balloon driver"
    depends on XEN_BALLOON && MEMORY_HOTPLUG
    help
      Memory hotplug support for the Xen balloon driver allows expanding
      the memory available to the system above the limit declared at
      system startup. It is very useful on critical systems which require
      a long uptime without rebooting.

      Memory can be hotplugged in the following steps:

        1) target domain: ensure that the memory auto online policy is in
           effect by checking the
           /sys/devices/system/memory/auto_online_blocks file (should be
           'online'),

        2) control domain: xl mem-max <target-domain> <maxmem>
           where <maxmem> is >= requested memory size,

        3) control domain: xl mem-set <target-domain> <memory>
           where <memory> is the requested memory size; alternatively
           memory could be added by writing a proper value to
           /sys/devices/system/xen_memory/xen_memory0/target or
           /sys/devices/system/xen_memory/xen_memory0/target_kb on the
           target domain.

      Alternatively, if memory auto onlining was not requested at step 1,
      the newly added memory can be manually onlined in the target domain
      by doing the following:

        for i in /sys/devices/system/memory/memory*/state; do \
          [ "`cat "$i"`" = offline ] && echo online > "$i"; done

      or by adding the following line to udev rules:

        SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
    int "Hotplugged memory limit (in GiB) for a PV guest"
    default 512 if X86_64
    default 4 if X86_32
    range 0 64 if X86_32
    depends on XEN_HAVE_PVMMU
    depends on XEN_BALLOON_MEMORY_HOTPLUG
    help
      Maximum amount of memory (in GiB) that a PV guest can be
      expanded to when using memory hotplug.

      A PV guest can have more memory than this limit if it is
      started with a larger maximum.

      This value is used to allocate enough space in internal
      tables needed for physical memory administration.

config XEN_SCRUB_PAGES_DEFAULT
    bool "Scrub pages before returning them to system by default"
    depends on XEN_BALLOON
    default y
    help
      Scrub pages before returning them to the system for reuse by
      other domains. This makes sure that any confidential data
      is not accidentally visible to other domains. It is more
      secure, but slightly less efficient. This can be controlled with
      the xen_scrub_pages=0 parameter and
      /sys/devices/system/xen_memory/xen_memory0/scrub_pages.
      This option only sets the default value.

      If in doubt, say yes.

config XEN_DEV_EVTCHN
    tristate "Xen /dev/xen/evtchn device"
    default y
    help
      The evtchn driver allows a userspace process to trigger event
      channels and to receive notification of an event channel
      firing.

      If in doubt, say yes.

config XEN_BACKEND
    bool "Backend driver support"
    default XEN_DOM0
    help
      Support for backend device drivers that provide I/O services
      to other virtual machines.

config XENFS
    tristate "Xen filesystem"
    select XEN_PRIVCMD
    default y
    help
      The xen filesystem provides a way for domains to share
      information with each other and with the hypervisor.
      For example, by reading and writing the "xenbus" file, guests
      may pass arbitrary information to the initial domain.
      If in doubt, say yes.
config XEN_COMPAT_XENFS
    bool "Create compatibility mount point /proc/xen"
    depends on XENFS
    default y
    help
      The old xenstore userspace tools expect to find "xenbus"
      under /proc/xen, but "xenbus" is now found at the root of the
      xenfs filesystem. Selecting this causes the kernel to create
      the compatibility mount point /proc/xen if it is running on
      a xen platform.
      If in doubt, say yes.

config XEN_SYS_HYPERVISOR
    bool "Create xen entries under /sys/hypervisor"
    depends on SYSFS
    select SYS_HYPERVISOR
    default y
    help
      Create entries under /sys/hypervisor describing the Xen
      hypervisor environment. When running native or in another
      virtual environment, /sys/hypervisor will still be present,
      but will have no xen contents.

config XEN_XENBUS_FRONTEND
    tristate

config XEN_GNTDEV
    tristate "userspace grant access device driver"
    depends on XEN
    default m
    select MMU_NOTIFIER
    help
      Allows userspace processes to use grants.

config XEN_GNTDEV_DMABUF
    bool "Add support for dma-buf grant access device driver extension"
    depends on XEN_GNTDEV && XEN_GRANT_DMA_ALLOC
    select DMA_SHARED_BUFFER
    help
      Allows userspace processes and kernel modules to use the Xen-backed
      dma-buf implementation. With this extension, grant references to
      the pages of an imported dma-buf can be exported for use by another
      domain, and grant references coming from a foreign domain can be
      converted into a local dma-buf for local export.

config XEN_GRANT_DEV_ALLOC
    tristate "User-space grant reference allocator driver"
    depends on XEN
    default m
    help
      Allows userspace processes to create pages with access granted
      to other domains. This can be used to implement frontend drivers
      or as part of an inter-domain shared memory channel.

config XEN_GRANT_DMA_ALLOC
    bool "Allow allocating DMA capable buffers with grant reference module"
    depends on XEN && HAS_DMA
    help
      Extends the grant table module API to allow allocating DMA capable
      buffers and mapping foreign grant references on top of them.
      The resulting buffer is similar to one allocated by the balloon
      driver in that proper memory reservation is made by
      {increase|decrease}_reservation and VA mappings are updated if
      needed.
      This is useful for sharing foreign buffers with HW drivers which
      cannot work with scattered buffers provided by the balloon driver,
      but require DMAable memory instead.

config SWIOTLB_XEN
    def_bool y
    select SWIOTLB

config XEN_PCIDEV_BACKEND
    tristate "Xen PCI-device backend driver"
    depends on PCI && X86 && XEN
    depends on XEN_BACKEND
    default m
    help
      The PCI device backend driver allows the kernel to export arbitrary
      PCI devices to other guests. If you select this to be a module, you
      will need to make sure no other driver has bound to the device(s)
      you want to make visible to other guests.

      The parameter "passthrough" allows you to specify how you want the
      PCI devices to appear in the guest. You can choose the default (0)
      where the PCI topology starts at 00.00.0, or (1) for passthrough if
      you want the PCI device topology to appear the same as in the host.

      The "hide" parameter (only applicable if the backend driver is
      compiled into the kernel) allows you to bind the PCI devices to
      this module away from the default device drivers. The argument is
      the list of PCI BDFs: xen-pciback.hide=(03:00.0)(04:00.0)

      If in doubt, say m.

config XEN_PVCALLS_FRONTEND
    tristate "XEN PV Calls frontend driver"
    depends on INET && XEN
    select XEN_XENBUS_FRONTEND
    help
      Experimental frontend for the Xen PV Calls protocol
      (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
      sends a small set of POSIX calls to the backend, which
      implements them.
config XEN_PVCALLS_BACKEND
    bool "XEN PV Calls backend driver"
    depends on INET && XEN && XEN_BACKEND
    help
      Experimental backend for the Xen PV Calls protocol
      (https://xenbits.xen.org/docs/unstable/misc/pvcalls.html). It
      allows PV Calls frontends to send POSIX calls to the backend,
      which implements them.
      If in doubt, say n.

config XEN_SCSI_BACKEND
    tristate "XEN SCSI backend driver"
    depends on XEN && XEN_BACKEND && TARGET_CORE
    help
      The SCSI backend driver allows the kernel to export its SCSI
      devices to other guests via a high-performance shared-memory
      interface. Only needed for systems running as XEN driver domains
      (e.g. Dom0) and if guests need generic access to SCSI devices.

config XEN_PRIVCMD
    tristate
    depends on XEN
    default m

config XEN_STUB
    bool "Xen stub drivers"
    depends on XEN && X86_64 && BROKEN
    help
      Allow the kernel to install stub drivers, to reserve space for Xen
      drivers (i.e. memory hotplug and cpu hotplug), and to block native
      drivers from loading, so that the real Xen drivers can be modular.

      To enable Xen features like cpu and memory hotplug, select Y here.

config XEN_ACPI_HOTPLUG_MEMORY
    tristate "Xen ACPI memory hotplug"
    depends on XEN_DOM0 && XEN_STUB && ACPI
    help
      This is Xen ACPI memory hotplug.

      Currently Xen only supports ACPI memory hot-add. If you want to
      hot-add memory at runtime (the hot-added memory cannot be removed
      until the machine stops), select Y/M here, otherwise select N.

config XEN_ACPI_HOTPLUG_CPU
    tristate "Xen ACPI cpu hotplug"
    depends on XEN_DOM0 && XEN_STUB && ACPI
    select ACPI_CONTAINER
    help
      Xen ACPI cpu enumeration and hotplugging.

      For hotplugging, currently Xen only supports ACPI cpu hot-add.
      If you want to hot-add a cpu at runtime (the hot-added cpu cannot
      be removed until the machine stops), select Y/M here.

config XEN_ACPI_PROCESSOR
    tristate "Xen ACPI processor"
    depends on XEN && XEN_DOM0 && X86 && ACPI_PROCESSOR && CPU_FREQ
    default m
    help
      This ACPI processor uploads Power Management information to the Xen
      hypervisor.

      To do that the driver parses the Power Management data and uploads
      said information to the Xen hypervisor. Then the Xen hypervisor can
      select the proper Cx and Pxx states. It also registers itself as the
      SMM so that other drivers (such as the ACPI cpufreq scaling driver)
      will not load.

      To compile this driver as a module, choose M here: the module will
      be called xen_acpi_processor. If you do not know what to choose,
      select M here. If the CPUFREQ drivers are built in, select Y here.

config XEN_MCE_LOG
    bool "Xen platform mcelog"
    depends on XEN_DOM0 && X86_64 && X86_MCE
    help
      Allow the kernel to fetch MCE errors from the Xen platform and
      convert them into Linux mcelog format for mcelog tools.

config XEN_HAVE_PVMMU
    bool

config XEN_EFI
    def_bool y
    depends on (ARM || ARM64 || X86_64) && EFI

config XEN_AUTO_XLATE
    def_bool y
    depends on ARM || ARM64 || XEN_PVHVM
    help
      Support for auto-translated physmap guests.

config XEN_ACPI
    def_bool y
    depends on X86 && ACPI

config XEN_SYMS
    bool "Xen symbols"
    depends on X86 && XEN_DOM0 && XENFS
    default y if KALLSYMS
    help
      Exports hypervisor symbols (along with their types and addresses)
      via the /proc/xen/xensyms file, similar to /proc/kallsyms.

config XEN_HAVE_VPMU
    bool

config XEN_FRONT_PGDIR_SHBUF
    tristate

endmenu
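Step 3 of the balloon hotplug recipe in XEN_BALLOON_MEMORY_HOTPLUG above writes the new size to the balloon's sysfs target file. A minimal illustrative userspace sketch (not part of this tree; the sysfs path is taken from the help text, error handling kept to the basics):

/* Hypothetical helper: set the balloon target (in KiB) from userspace. */
#include <stdio.h>

static int set_balloon_target_kb(unsigned long kb)
{
    FILE *f = fopen("/sys/devices/system/xen_memory/xen_memory0/target_kb", "w");

    if (!f)
        return -1;
    fprintf(f, "%lu\n", kb);
    return fclose(f);
}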
Makefile

# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_HOTPLUG_CPU)               += cpu_hotplug.o
obj-y   += grant-table.o features.o balloon.o manage.o preempt.o time.o
obj-y   += mem-reservation.o
obj-y   += events/
obj-y   += xenbus/

nostackp := $(call cc-option, -fno-stack-protector)
CFLAGS_features.o                       := $(nostackp)

dom0-$(CONFIG_ARM64) += arm-device.o
dom0-$(CONFIG_PCI) += pci.o
dom0-$(CONFIG_USB_SUPPORT) += dbgp.o
dom0-$(CONFIG_XEN_ACPI) += acpi.o $(xen-pad-y)
xen-pad-$(CONFIG_X86) += xen-acpi-pad.o
dom0-$(CONFIG_X86) += pcpu.o
obj-$(CONFIG_XEN_DOM0)                  += $(dom0-y)
obj-$(CONFIG_BLOCK)                     += biomerge.o
obj-$(CONFIG_XEN_BALLOON)               += xen-balloon.o
obj-$(CONFIG_XEN_DEV_EVTCHN)            += xen-evtchn.o
obj-$(CONFIG_XEN_GNTDEV)                += xen-gntdev.o
obj-$(CONFIG_XEN_GRANT_DEV_ALLOC)       += xen-gntalloc.o
obj-$(CONFIG_XENFS)                     += xenfs/
obj-$(CONFIG_XEN_SYS_HYPERVISOR)        += sys-hypervisor.o
obj-$(CONFIG_XEN_PVHVM)                 += platform-pci.o
obj-$(CONFIG_SWIOTLB_XEN)               += swiotlb-xen.o
obj-$(CONFIG_XEN_MCE_LOG)               += mcelog.o
obj-$(CONFIG_XEN_PCIDEV_BACKEND)        += xen-pciback/
obj-$(CONFIG_XEN_PRIVCMD)               += xen-privcmd.o
obj-$(CONFIG_XEN_STUB)                  += xen-stub.o
obj-$(CONFIG_XEN_ACPI_HOTPLUG_MEMORY)   += xen-acpi-memhotplug.o
obj-$(CONFIG_XEN_ACPI_HOTPLUG_CPU)      += xen-acpi-cpuhotplug.o
obj-$(CONFIG_XEN_ACPI_PROCESSOR)        += xen-acpi-processor.o
obj-$(CONFIG_XEN_EFI)                   += efi.o
obj-$(CONFIG_XEN_SCSI_BACKEND)          += xen-scsiback.o
obj-$(CONFIG_XEN_AUTO_XLATE)            += xlate_mmu.o
obj-$(CONFIG_XEN_PVCALLS_BACKEND)       += pvcalls-back.o
obj-$(CONFIG_XEN_PVCALLS_FRONTEND)      += pvcalls-front.o
xen-evtchn-y                            := evtchn.o
xen-gntdev-y                            := gntdev.o
xen-gntdev-$(CONFIG_XEN_GNTDEV_DMABUF)  += gntdev-dmabuf.o
xen-gntalloc-y                          := gntalloc.o
xen-privcmd-y                           := privcmd.o privcmd-buf.o
obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)     += xen-front-pgdir-shbuf.o

events/Makefile

# SPDX-License-Identifier: GPL-2.0-only
obj-y += events.o

events-y += events_base.o
events-y += events_2l.o
events-y += events_fifo.o

xen-pciback/Makefile

# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_XEN_PCIDEV_BACKEND) += xen-pciback.o

xen-pciback-y := pci_stub.o pciback_ops.o xenbus.o
xen-pciback-y += conf_space.o conf_space_header.o \
                 conf_space_capability.o \
                 conf_space_quirks.o vpci.o \
                 passthrough.o

xenfs/Makefile

# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_XENFS) += xenfs.o

xenfs-y                  = super.o
xenfs-$(CONFIG_XEN_DOM0) += xenstored.o
xenfs-$(CONFIG_XEN_SYMS) += xensyms.o

xenbus/Makefile

# SPDX-License-Identifier: GPL-2.0
obj-y += xenbus.o
obj-y += xenbus_dev_frontend.o

xenbus-objs =
xenbus-objs += xenbus_client.o
xenbus-objs += xenbus_comms.o
xenbus-objs += xenbus_xs.o
xenbus-objs += xenbus_probe.o

xenbus-be-objs-$(CONFIG_XEN_BACKEND) += xenbus_probe_backend.o
xenbus-objs += $(xenbus-be-objs-y)

obj-$(CONFIG_XEN_BACKEND) += xenbus_dev_backend.o
obj-$(CONFIG_XEN_XENBUS_FRONTEND) += xenbus_probe_frontend.o
interface/xen-mca.h

/******************************************************************************
 * arch-x86/mca.h
 * Guest OS machine check interface to x86 Xen.
 *
 * Contributed by Advanced Micro Devices, Inc.
 * Author: Christoph Egger <Christoph.Egger@amd.com>
 *
 * Updated by Intel Corporation
 * Author: Liu, Jinsong <jinsong.liu@intel.com>
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef __XEN_PUBLIC_ARCH_X86_MCA_H__
#define __XEN_PUBLIC_ARCH_X86_MCA_H__

/* Hypercall */
#define __HYPERVISOR_mca __HYPERVISOR_arch_0

#define XEN_MCA_INTERFACE_VERSION 0x01ecc003

/* IN: Dom0 calls hypercall to retrieve nonurgent error log entry */
#define XEN_MC_NONURGENT 0x1
/* IN: Dom0 calls hypercall to retrieve urgent error log entry */
#define XEN_MC_URGENT 0x2
/* IN: Dom0 acknowledges previously-fetched error log entry */
#define XEN_MC_ACK 0x4

/* OUT: All is ok */
#define XEN_MC_OK 0x0
/* OUT: Domain could not fetch data. */
#define XEN_MC_FETCHFAILED 0x1
/* OUT: There was no machine check data to fetch. */
#define XEN_MC_NODATA 0x2

#ifndef __ASSEMBLY__

/* vIRQ injected to Dom0 */
#define VIRQ_MCA VIRQ_ARCH_0

/*
 * mc_info entry types
 * MCA machine check info is recorded in mc_info entries.
 * When fetching MCA info, MC_TYPE_... can be used to distinguish
 * the different entry types.
 */
#define MC_TYPE_GLOBAL   0
#define MC_TYPE_BANK     1
#define MC_TYPE_EXTENDED 2
#define MC_TYPE_RECOVERY 3

struct mcinfo_common {
    uint16_t type; /* structure type */
    uint16_t size; /* size of this struct in bytes */
};

#define MC_FLAG_CORRECTABLE   (1 << 0)
#define MC_FLAG_UNCORRECTABLE (1 << 1)
#define MC_FLAG_RECOVERABLE   (1 << 2)
#define MC_FLAG_POLLED        (1 << 3)
#define MC_FLAG_RESET         (1 << 4)
#define MC_FLAG_CMCI          (1 << 5)
#define MC_FLAG_MCE           (1 << 6)

/* contains x86 global mc information */
struct mcinfo_global {
    struct mcinfo_common common;
    uint16_t mc_domid;         /* running domain at the time in error */
    uint16_t mc_vcpuid;        /* virtual cpu scheduled for mc_domid */
    uint32_t mc_socketid;      /* physical socket of the physical core */
    uint16_t mc_coreid;        /* physical impacted core */
    uint16_t mc_core_threadid; /* core thread of physical core */
    uint32_t mc_apicid;
    uint32_t mc_flags;
    uint64_t mc_gstatus;       /* global status */
};

/* contains x86 bank mc information */
struct mcinfo_bank {
    struct mcinfo_common common;
    uint16_t mc_bank;   /* bank nr */
    uint16_t mc_domid;  /* domain referenced by mc_addr if valid */
    uint64_t mc_status; /* bank status */
    uint64_t mc_addr;   /* bank address */
    uint64_t mc_misc;
    uint64_t mc_ctrl2;
    uint64_t mc_tsc;
};

struct mcinfo_msr {
    uint64_t reg;   /* MSR */
    uint64_t value; /* MSR value */
};

/* contains mc information from other or additional mc MSRs */
struct mcinfo_extended {
    struct mcinfo_common common;
    uint32_t mc_msrs; /* Number of msrs with valid values. */
    /*
     * Currently Intel extended MSRs (32/64) include all gp registers
     * and E(R)FLAGS, E(R)IP, E(R)MISC; up to 11/19 of them might be
     * useful at present. So expand this array to 16/32 to leave room.
     */
    struct mcinfo_msr mc_msr[sizeof(void *) * 4];
};

/* Recovery Action flags. Giving recovery result information to DOM0 */

/* Xen takes successful recovery action, the error is recovered */
#define REC_ACTION_RECOVERED (0x1 << 0)
/* No action is performed by XEN */
#define REC_ACTION_NONE (0x1 << 1)
/* It's possible DOM0 might take action ownership in some cases */
#define REC_ACTION_NEED_RESET (0x1 << 2)

/*
 * Different Recovery Action types; if the action is performed
 * successfully, the REC_ACTION_RECOVERED flag will be returned.
 */

/* Page Offline Action */
#define MC_ACTION_PAGE_OFFLINE (0x1 << 0)
/* CPU offline Action */
#define MC_ACTION_CPU_OFFLINE (0x1 << 1)
/* L3 cache disable Action */
#define MC_ACTION_CACHE_SHRINK (0x1 << 2)

/*
 * The interface below is used between XEN and DOM0 for passing XEN's
 * recovery action information to DOM0.
 */
struct page_offline_action {
    /* Params for passing the offlined page number to DOM0 */
    uint64_t mfn;
    uint64_t status;
};

struct cpu_offline_action {
    /* Params for passing the identity of the offlined CPU to DOM0 */
    uint32_t mc_socketid;
    uint16_t mc_coreid;
    uint16_t mc_core_threadid;
};

#define MAX_UNION_SIZE 16
struct mcinfo_recovery {
    struct mcinfo_common common;
    uint16_t mc_bank; /* bank nr */
    uint8_t action_flags;
    uint8_t action_types;
    union {
        struct page_offline_action page_retire;
        struct cpu_offline_action cpu_offline;
        uint8_t pad[MAX_UNION_SIZE];
    } action_info;
};

#define MCINFO_MAXSIZE 768
struct mc_info {
    /* Number of mcinfo_* entries in mi_data */
    uint32_t mi_nentries;
    uint32_t flags;
    uint64_t mi_data[(MCINFO_MAXSIZE - 1) / 8];
};
DEFINE_GUEST_HANDLE_STRUCT(mc_info);

#define __MC_MSR_ARRAYSIZE 8
#define __MC_MSR_MCGCAP 0
#define __MC_NMSRS 1
#define MC_NCAPS 7
struct mcinfo_logical_cpu {
    uint32_t mc_cpunr;
    uint32_t mc_chipid;
    uint16_t mc_coreid;
    uint16_t mc_threadid;
    uint32_t mc_apicid;
    uint32_t mc_clusterid;
    uint32_t mc_ncores;
    uint32_t mc_ncores_active;
    uint32_t mc_nthreads;
    uint32_t mc_cpuid_level;
    uint32_t mc_family;
    uint32_t mc_vendor;
    uint32_t mc_model;
    uint32_t mc_step;
    char mc_vendorid[16];
    char mc_brandid[64];
    uint32_t mc_cpu_caps[MC_NCAPS];
    uint32_t mc_cache_size;
    uint32_t mc_cache_alignment;
    uint32_t mc_nmsrvals;
    struct mcinfo_msr mc_msrvalues[__MC_MSR_ARRAYSIZE];
};
DEFINE_GUEST_HANDLE_STRUCT(mcinfo_logical_cpu);

/*
 * Prototype:
 *    uint32_t x86_mcinfo_nentries(struct mc_info *mi);
 */
#define x86_mcinfo_nentries(_mi) \
    ((_mi)->mi_nentries)
/*
 * Prototype:
 *    struct mcinfo_common *x86_mcinfo_first(struct mc_info *mi);
 */
#define x86_mcinfo_first(_mi) \
    ((struct mcinfo_common *)(_mi)->mi_data)
/*
 * Prototype:
 *    struct mcinfo_common *x86_mcinfo_next(struct mcinfo_common *mic);
 */
#define x86_mcinfo_next(_mic) \
    ((struct mcinfo_common *)((uint8_t *)(_mic) + (_mic)->size))

/*
 * Prototype:
 *    void x86_mcinfo_lookup(void *ret, struct mc_info *mi, uint16_t type);
 */
static inline void x86_mcinfo_lookup(struct mcinfo_common **ret,
                                     struct mc_info *mi, uint16_t type)
{
    uint32_t i;
    struct mcinfo_common *mic;
    bool found = false;

    if (!ret || !mi)
        return;

    mic = x86_mcinfo_first(mi);
    for (i = 0; i < x86_mcinfo_nentries(mi); i++) {
        if (mic->type == type) {
            found = true;
            break;
        }
        mic = x86_mcinfo_next(mic);
    }

    *ret = found ? mic : NULL;
}
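/*
 * Illustrative usage sketch, not part of the original header: walk the
 * mcinfo entries of a fetched mc_info with the accessor macros above.
 * Both helpers rely only on definitions from this file.
 */
static inline struct mcinfo_global *x86_mcinfo_find_global(struct mc_info *mi)
{
    struct mcinfo_common *mic = NULL;

    /* x86_mcinfo_lookup() returns the first entry of the given type. */
    x86_mcinfo_lookup(&mic, mi, MC_TYPE_GLOBAL);
    return (struct mcinfo_global *)mic;
}

static inline unsigned int x86_mcinfo_count_banks(struct mc_info *mi)
{
    struct mcinfo_common *mic = x86_mcinfo_first(mi);
    unsigned int i, nbanks = 0;

    /* Manual walk using x86_mcinfo_first()/x86_mcinfo_next(). */
    for (i = 0; i < x86_mcinfo_nentries(mi); i++) {
        if (mic->type == MC_TYPE_BANK)
            nbanks++;
        mic = x86_mcinfo_next(mic);
    }
    return nbanks;
}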
/*
 * Fetch machine check data from hypervisor.
 */
#define XEN_MC_fetch 1
struct xen_mc_fetch {
    /*
     * IN: XEN_MC_NONURGENT, XEN_MC_URGENT,
     *     XEN_MC_ACK if ack'ing an earlier fetch
     * OUT: XEN_MC_OK, XEN_MC_FETCHFAILED, XEN_MC_NODATA
     */
    uint32_t flags;
    uint32_t _pad0;
    /* OUT: id for ack, IN: id we are ack'ing */
    uint64_t fetch_id;

    /* OUT variables. */
    GUEST_HANDLE(mc_info) data;
};
DEFINE_GUEST_HANDLE_STRUCT(xen_mc_fetch);

/*
 * This tells the hypervisor to notify a DomU about the machine check error
 */
#define XEN_MC_notifydomain 2
struct xen_mc_notifydomain {
    /* IN variables */
    uint16_t mc_domid;  /* The unprivileged domain to notify */
    uint16_t mc_vcpuid; /* The vcpu in mc_domid to notify */

    /* IN/OUT variables */
    uint32_t flags;
};
DEFINE_GUEST_HANDLE_STRUCT(xen_mc_notifydomain);

#define XEN_MC_physcpuinfo 3
struct xen_mc_physcpuinfo {
    /* IN/OUT */
    uint32_t ncpus;
    uint32_t _pad0;
    /* OUT */
    GUEST_HANDLE(mcinfo_logical_cpu) info;
};

#define XEN_MC_msrinject 4
#define MC_MSRINJ_MAXMSRS 8
struct xen_mc_msrinject {
    /* IN */
    uint32_t mcinj_cpunr; /* target processor id */
    uint32_t mcinj_flags; /* see MC_MSRINJ_F_* below */
    uint32_t mcinj_count; /* 0 .. count-1 in array are valid */
    uint32_t _pad0;
    struct mcinfo_msr mcinj_msr[MC_MSRINJ_MAXMSRS];
};

/* Flags for mcinj_flags above; bits 16-31 are reserved */
#define MC_MSRINJ_F_INTERPOSE 0x1

#define XEN_MC_mceinject 5
struct xen_mc_mceinject {
    unsigned int mceinj_cpunr; /* target processor id */
};

struct xen_mc {
    uint32_t cmd;
    uint32_t interface_version; /* XEN_MCA_INTERFACE_VERSION */
    union {
        struct xen_mc_fetch        mc_fetch;
        struct xen_mc_notifydomain mc_notifydomain;
        struct xen_mc_physcpuinfo  mc_physcpuinfo;
        struct xen_mc_msrinject    mc_msrinject;
        struct xen_mc_mceinject    mc_mceinject;
    } u;
};
DEFINE_GUEST_HANDLE_STRUCT(xen_mc);

/* Fields are zero when not available */
struct xen_mce {
    __u64 status;
    __u64 misc;
    __u64 addr;
    __u64 mcgstatus;
    __u64 ip;
    __u64 tsc;          /* cpu time stamp counter */
    __u64 time;         /* wall time_t when error was detected */
    __u8  cpuvendor;    /* cpu vendor as encoded in system.h */
    __u8  inject_flags; /* software inject flags */
    __u16 pad;
    __u32 cpuid;        /* CPUID 1 EAX */
    __u8  cs;           /* code segment */
    __u8  bank;         /* machine check bank */
    __u8  cpu;          /* cpu number; obsolete; use extcpu now */
    __u8  finished;     /* entry is valid */
    __u32 extcpu;       /* linux cpu number that detected the error */
    __u32 socketid;     /* CPU socket ID */
    __u32 apicid;       /* CPU initial apic ID */
    __u64 mcgcap;       /* MCGCAP MSR: machine check capabilities of CPU */
};

/*
 * This structure contains all data related to the MCE log. Also
 * carries a signature to make it easier to find from external
 * debugging tools. Each entry is only valid when its finished flag
 * is set.
 */
#define XEN_MCE_LOG_LEN 32

struct xen_mce_log {
    char signature[12]; /* "MACHINECHECK" */
    unsigned len;       /* = XEN_MCE_LOG_LEN */
    unsigned next;
    unsigned flags;
    unsigned recordlen; /* length of struct xen_mce */
    struct xen_mce entry[XEN_MCE_LOG_LEN];
};

#define XEN_MCE_OVERFLOW 0 /* bit 0 in flags means overflow */

#define XEN_MCE_LOG_SIGNATURE "MACHINECHECK"

#define MCE_GET_RECORD_LEN  _IOR('M', 1, int)
#define MCE_GET_LOG_LEN     _IOR('M', 2, int)
#define MCE_GETCLEAR_FLAGS  _IOR('M', 3, int)

#endif /* __ASSEMBLY__ */
#endif /* __XEN_PUBLIC_ARCH_X86_MCA_H__ */
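A hedged sketch of how Dom0 might use the fetch interface above: build a struct xen_mc with cmd XEN_MC_fetch, point the guest handle at a local mc_info buffer, and issue the MCA hypercall. The HYPERVISOR_mca wrapper and set_xen_guest_handle helper are assumed from the kernel's Xen headers; treat this as an outline of the flow, not a verbatim excerpt of the mcelog driver.

static struct mc_info mc_buf;

static int fetch_one_nonurgent(void)
{
    struct xen_mc mc_op = {
        .cmd               = XEN_MC_fetch,
        .interface_version = XEN_MCA_INTERFACE_VERSION,
    };
    struct xen_mc_fetch *mc_fetch = &mc_op.u.mc_fetch;
    int rc;

    mc_fetch->flags = XEN_MC_NONURGENT;
    set_xen_guest_handle(mc_fetch->data, &mc_buf); /* assumed helper */

    rc = HYPERVISOR_mca(&mc_op);                   /* assumed wrapper */
    if (rc)
        return rc;
    if (mc_fetch->flags & XEN_MC_NODATA)           /* nothing queued */
        return -ENOENT;
    /* mc_buf now holds the record; fetch_id can be used with XEN_MC_ACK. */
    return 0;
}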
interface/elfnote.h

/******************************************************************************
 * elfnote.h
 *
 * Definitions used for the Xen ELF notes.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 *
 * Copyright (c) 2006, Ian Campbell, XenSource Ltd.
 */

#ifndef __XEN_PUBLIC_ELFNOTE_H__
#define __XEN_PUBLIC_ELFNOTE_H__

/*
 * The notes should live in a SHT_NOTE segment and have "Xen" in the
 * name field.
 *
 * Numeric types are either 4 or 8 bytes depending on the content of
 * the desc field.
 *
 * LEGACY indicates the field in the legacy __xen_guest string which
 * this note type replaces.
 *
 * String values (for non-legacy) are NULL terminated ASCII, also known
 * as ASCIZ type.
 */

/*
 * NAME=VALUE pair (string).
 */
#define XEN_ELFNOTE_INFO           0

/*
 * The virtual address of the entry point (numeric).
 *
 * LEGACY: VIRT_ENTRY
 */
#define XEN_ELFNOTE_ENTRY          1

/* The virtual address of the hypercall transfer page (numeric).
 *
 * LEGACY: HYPERCALL_PAGE. (n.b. legacy value is a physical page
 * number not a virtual address)
 */
#define XEN_ELFNOTE_HYPERCALL_PAGE 2

/* The virtual address where the kernel image should be mapped (numeric).
 *
 * Defaults to 0.
 *
 * LEGACY: VIRT_BASE
 */
#define XEN_ELFNOTE_VIRT_BASE      3

/*
 * The offset of the ELF paddr field from the actual required
 * pseudo-physical address (numeric).
 *
 * This is used to maintain backwards compatibility with older kernels
 * which wrote __PAGE_OFFSET into that field. This field defaults to 0
 * if not present.
 *
 * LEGACY: ELF_PADDR_OFFSET. (n.b. legacy default is VIRT_BASE)
 */
#define XEN_ELFNOTE_PADDR_OFFSET   4

/*
 * The version of Xen that we work with (string).
 *
 * LEGACY: XEN_VER
 */
#define XEN_ELFNOTE_XEN_VERSION    5

/*
 * The name of the guest operating system (string).
 *
 * LEGACY: GUEST_OS
 */
#define XEN_ELFNOTE_GUEST_OS       6

/*
 * The version of the guest operating system (string).
 *
 * LEGACY: GUEST_VER
 */
#define XEN_ELFNOTE_GUEST_VERSION  7

/*
 * The loader type (string).
 *
 * LEGACY: LOADER
 */
#define XEN_ELFNOTE_LOADER         8

/*
 * The kernel supports PAE (x86/32 only, string = "yes" or "no").
 *
 * LEGACY: PAE (n.b. The legacy interface included a provision to
 * indicate 'extended-cr3' support allowing L3 page tables to be
 * placed above 4G. It is assumed that any kernel new enough to use
 * these ELF notes will include this and therefore "yes" here is
 * equivalent to "yes[extended-cr3]" in the __xen_guest interface.)
 */
#define XEN_ELFNOTE_PAE_MODE       9

/*
 * The features supported/required by this kernel (string).
 *
 * The string must consist of a list of feature names (as given in
 * features.h, without the "XENFEAT_" prefix) separated by '|'
 * characters. If a feature is required for the kernel to function
 * then the feature name must be preceded by a '!' character.
 *
 * LEGACY: FEATURES
 */
#define XEN_ELFNOTE_FEATURES       10

/*
 * The kernel requires the symbol table to be loaded (string = "yes" or "no").
 *
 * LEGACY: BSD_SYMTAB (n.b. The legacy interface treated the presence
 * or absence of this string as a boolean flag rather than requiring
 * "yes" or "no".)
 */
#define XEN_ELFNOTE_BSD_SYMTAB     11

/*
 * The lowest address the hypervisor hole can begin at (numeric).
 *
 * This must not be set higher than HYPERVISOR_VIRT_START. Its presence
 * also indicates to the hypervisor that the kernel can deal with the
 * hole starting at a higher address.
 */
#define XEN_ELFNOTE_HV_START_LOW   12

/*
 * List of maddr_t-sized mask/value pairs describing how to recognize
 * (non-present) L1 page table entries carrying valid MFNs (numeric).
 */
#define XEN_ELFNOTE_L1_MFN_VALID   13
/*
 * Whether or not the guest supports cooperative suspend cancellation.
 * This is a numeric value.
 *
 * Default is 0.
 */
#define XEN_ELFNOTE_SUSPEND_CANCEL 14

/*
 * The (non-default) location the initial phys-to-machine map should be
 * placed at by the hypervisor (Dom0) or the tools (DomU).
 * The kernel must be prepared for this mapping to be established using
 * large pages, despite such otherwise not being available to guests.
 * The kernel must also be able to handle the page table pages used for
 * this mapping not being accessible through the initial mapping.
 * (Only x86-64 supports this at present.)
 */
#define XEN_ELFNOTE_INIT_P2M       15

/*
 * Whether or not the guest can deal with being passed an initrd not
 * mapped through its initial page tables.
 */
#define XEN_ELFNOTE_MOD_START_PFN  16

/*
 * The features supported by this kernel (numeric).
 *
 * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
 * kernel to specify support for features that older hypervisors don't
 * know about. The set of features 4.2 and newer hypervisors will
 * consider supported by the kernel is the combination of the sets
 * specified through this and the string note.
 *
 * LEGACY: FEATURES
 */
#define XEN_ELFNOTE_SUPPORTED_FEATURES 17

/*
 * Physical entry point into the kernel.
 *
 * 32bit entry point into the kernel. When requested to launch the
 * guest kernel in a HVM container, Xen will use this entry point to
 * launch the guest in 32bit protected mode with paging disabled.
 * Ignored otherwise.
 */
#define XEN_ELFNOTE_PHYS32_ENTRY   18

/*
 * The number of the highest elfnote defined.
 */
#define XEN_ELFNOTE_MAX XEN_ELFNOTE_PHYS32_ENTRY

#endif /* __XEN_PUBLIC_ELFNOTE_H__ */

/*
 * Local variables:
 * mode: C
 * c-set-style: "BSD"
 * c-basic-offset: 4
 * tab-width: 4
 * indent-tabs-mode: nil
 * End:
 */
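These notes are consumed from an ELF note section named "Xen". As a hedged illustration (kernels normally emit these from assembly via their own ELFNOTE macros; this hand-rolled C variant and its struct name are hypothetical), one numeric note can be laid out with the standard ELF note format of 4-byte namesz/descsz/type words followed by 4-byte-aligned name and desc fields:

/* Illustrative only: emit XEN_ELFNOTE_SUSPEND_CANCEL = 1 as a raw ELF note. */
#include <stdint.h>

struct xen_elfnote_u32 {
    uint32_t namesz, descsz, type;
    char     name[4]; /* "Xen" + NUL, already 4-byte aligned */
    uint32_t desc;    /* 4-byte numeric payload */
};

__attribute__((section(".note.Xen"), aligned(4), used))
static const struct xen_elfnote_u32 suspend_cancel_note = {
    .namesz = 4,
    .descsz = 4,
    .type   = 14,     /* XEN_ELFNOTE_SUSPEND_CANCEL, from the header above */
    .name   = "Xen",
    .desc   = 1,
};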
interface/event_channel.h

/* SPDX-License-Identifier: GPL-2.0 */
/******************************************************************************
 * event_channel.h
 *
 * Event channels between domains.
 *
 * Copyright (c) 2003-2004, K A Fraser.
 */

#ifndef __XEN_PUBLIC_EVENT_CHANNEL_H__
#define __XEN_PUBLIC_EVENT_CHANNEL_H__

#include <xen/interface/xen.h>

typedef uint32_t evtchn_port_t;
DEFINE_GUEST_HANDLE(evtchn_port_t);

/*
 * EVTCHNOP_alloc_unbound: Allocate a port in domain <dom> and mark as
 * accepting interdomain bindings from domain <remote_dom>. A fresh port
 * is allocated in <dom> and returned as <port>.
 * NOTES:
 *  1. If the caller is unprivileged then <dom> must be DOMID_SELF.
 *  2. <rdom> may be DOMID_SELF, allowing loopback connections.
 */
#define EVTCHNOP_alloc_unbound 6
struct evtchn_alloc_unbound {
    /* IN parameters */
    domid_t dom, remote_dom;
    /* OUT parameters */
    evtchn_port_t port;
};

/*
 * EVTCHNOP_bind_interdomain: Construct an interdomain event channel between
 * the calling domain and <remote_dom>. <remote_dom,remote_port> must identify
 * a port that is unbound and marked as accepting bindings from the calling
 * domain. A fresh port is allocated in the calling domain and returned as
 * <local_port>.
 * NOTES:
 *  2. <remote_dom> may be DOMID_SELF, allowing loopback connections.
 */
#define EVTCHNOP_bind_interdomain 0
struct evtchn_bind_interdomain {
    /* IN parameters. */
    domid_t remote_dom;
    evtchn_port_t remote_port;
    /* OUT parameters. */
    evtchn_port_t local_port;
};

/*
 * EVTCHNOP_bind_virq: Bind a local event channel to VIRQ <irq> on specified
 * vcpu.
 * NOTES:
 *  1. A virtual IRQ may be bound to at most one event channel per vcpu.
 *  2. The allocated event channel is bound to the specified vcpu. The
 *     binding may not be changed.
 */
#define EVTCHNOP_bind_virq 1
struct evtchn_bind_virq {
    /* IN parameters. */
    uint32_t virq;
    uint32_t vcpu;
    /* OUT parameters. */
    evtchn_port_t port;
};

/*
 * EVTCHNOP_bind_pirq: Bind a local event channel to PIRQ <irq>.
 * NOTES:
 *  1. A physical IRQ may be bound to at most one event channel per domain.
 *  2. Only a sufficiently-privileged domain may bind to a physical IRQ.
 */
#define EVTCHNOP_bind_pirq 2
struct evtchn_bind_pirq {
    /* IN parameters. */
    uint32_t pirq;
#define BIND_PIRQ__WILL_SHARE 1
    uint32_t flags; /* BIND_PIRQ__* */
    /* OUT parameters. */
    evtchn_port_t port;
};

/*
 * EVTCHNOP_bind_ipi: Bind a local event channel to receive events.
 * NOTES:
 *  1. The allocated event channel is bound to the specified vcpu. The
 *     binding may not be changed.
 */
#define EVTCHNOP_bind_ipi 7
struct evtchn_bind_ipi {
    uint32_t vcpu;
    /* OUT parameters. */
    evtchn_port_t port;
};

/*
 * EVTCHNOP_close: Close a local event channel <port>. If the channel is
 * interdomain then the remote end is placed in the unbound state
 * (EVTCHNSTAT_unbound), awaiting a new connection.
 */
#define EVTCHNOP_close 3
struct evtchn_close {
    /* IN parameters. */
    evtchn_port_t port;
};

/*
 * EVTCHNOP_send: Send an event to the remote end of the channel whose local
 * endpoint is <port>.
 */
#define EVTCHNOP_send 4
struct evtchn_send {
    /* IN parameters. */
    evtchn_port_t port;
};

/*
 * EVTCHNOP_status: Get the current status of the communication channel which
 * has an endpoint at <dom, port>.
 * NOTES:
 *  1. <dom> may be specified as DOMID_SELF.
 *  2. Only a sufficiently-privileged domain may obtain the status of an
 *     event channel for which <dom> is not DOMID_SELF.
 */
#define EVTCHNOP_status 5
struct evtchn_status {
    /* IN parameters */
    domid_t dom;
    evtchn_port_t port;
    /* OUT parameters */
#define EVTCHNSTAT_closed      0 /* Channel is not in use.                  */
#define EVTCHNSTAT_unbound     1 /* Channel is waiting interdom connection. */
#define EVTCHNSTAT_interdomain 2 /* Channel is connected to remote domain.  */
#define EVTCHNSTAT_pirq        3 /* Channel is bound to a phys IRQ line.    */
#define EVTCHNSTAT_virq        4 /* Channel is bound to a virtual IRQ line  */
#define EVTCHNSTAT_ipi         5 /* Channel is bound to a virtual IPI line  */
    uint32_t status;
    uint32_t vcpu; /* VCPU to which this channel is bound. */
    union {
        struct {
            domid_t dom;
        } unbound;     /* EVTCHNSTAT_unbound */
        struct {
            domid_t dom;
            evtchn_port_t port;
        } interdomain; /* EVTCHNSTAT_interdomain */
        uint32_t pirq; /* EVTCHNSTAT_pirq */
        uint32_t virq; /* EVTCHNSTAT_virq */
    } u;
};

/*
 * EVTCHNOP_bind_vcpu: Specify which vcpu a channel should notify when an
 * event is pending.
 * NOTES:
 *  1. IPI- and VIRQ-bound channels always notify the vcpu that initialised
 *     the binding. This binding cannot be changed.
 *  2. All other channels notify vcpu0 by default. This default is set when
 *     the channel is allocated (a port that is freed and subsequently reused
 *     has its binding reset to vcpu0).
 */
#define EVTCHNOP_bind_vcpu 8
struct evtchn_bind_vcpu {
    /* IN parameters. */
    evtchn_port_t port;
    uint32_t vcpu;
};

/*
 * EVTCHNOP_unmask: Unmask the specified local event-channel port and deliver
 * a notification to the appropriate VCPU if an event is pending.
 */
#define EVTCHNOP_unmask 9
struct evtchn_unmask {
    /* IN parameters. */
    evtchn_port_t port;
};
/*
 * EVTCHNOP_reset: Close all event channels associated with specified domain.
 * NOTES:
 *  1. <dom> may be specified as DOMID_SELF.
 *  2. Only a sufficiently-privileged domain may specify other than
 *     DOMID_SELF.
 */
#define EVTCHNOP_reset 10
struct evtchn_reset {
    /* IN parameters. */
    domid_t dom;
};
typedef struct evtchn_reset evtchn_reset_t;

/*
 * EVTCHNOP_init_control: initialize the control block for the FIFO ABI.
 */
#define EVTCHNOP_init_control 11
struct evtchn_init_control {
    /* IN parameters. */
    uint64_t control_gfn;
    uint32_t offset;
    uint32_t vcpu;
    /* OUT parameters. */
    uint8_t link_bits;
    uint8_t _pad[7];
};

/*
 * EVTCHNOP_expand_array: add an additional page to the event array.
 */
#define EVTCHNOP_expand_array 12
struct evtchn_expand_array {
    /* IN parameters. */
    uint64_t array_gfn;
};

/*
 * EVTCHNOP_set_priority: set the priority for an event channel.
 */
#define EVTCHNOP_set_priority 13
struct evtchn_set_priority {
    /* IN parameters. */
    uint32_t port;
    uint32_t priority;
};

struct evtchn_op {
    uint32_t cmd; /* EVTCHNOP_* */
    union {
        struct evtchn_alloc_unbound    alloc_unbound;
        struct evtchn_bind_interdomain bind_interdomain;
        struct evtchn_bind_virq        bind_virq;
        struct evtchn_bind_pirq        bind_pirq;
        struct evtchn_bind_ipi         bind_ipi;
        struct evtchn_close            close;
        struct evtchn_send             send;
        struct evtchn_status           status;
        struct evtchn_bind_vcpu        bind_vcpu;
        struct evtchn_unmask           unmask;
    } u;
};
DEFINE_GUEST_HANDLE_STRUCT(evtchn_op);

/*
 * 2-level ABI
 */
#define EVTCHN_2L_NR_CHANNELS (sizeof(xen_ulong_t) * sizeof(xen_ulong_t) * 64)

/*
 * FIFO ABI
 */

/* Events may have priorities from 0 (highest) to 15 (lowest). */
#define EVTCHN_FIFO_PRIORITY_MAX     0
#define EVTCHN_FIFO_PRIORITY_DEFAULT 7
#define EVTCHN_FIFO_PRIORITY_MIN     15

#define EVTCHN_FIFO_MAX_QUEUES (EVTCHN_FIFO_PRIORITY_MIN + 1)

typedef uint32_t event_word_t;

#define EVTCHN_FIFO_PENDING 31
#define EVTCHN_FIFO_MASKED  30
#define EVTCHN_FIFO_LINKED  29
#define EVTCHN_FIFO_BUSY    28

#define EVTCHN_FIFO_LINK_BITS 17
#define EVTCHN_FIFO_LINK_MASK ((1 << EVTCHN_FIFO_LINK_BITS) - 1)

#define EVTCHN_FIFO_NR_CHANNELS (1 << EVTCHN_FIFO_LINK_BITS)

struct evtchn_fifo_control_block {
    uint32_t ready;
    uint32_t _rsvd;
    event_word_t head[EVTCHN_FIFO_MAX_QUEUES];
};

#endif /* __XEN_PUBLIC_EVENT_CHANNEL_H__ */
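A minimal sketch of driving one of the operations above from kernel code: fill in the per-op argument struct and pass it with the EVTCHNOP_* command number. The HYPERVISOR_event_channel_op wrapper is assumed from the kernel's Xen hypercall headers, so treat the exact call site as an assumption.

/* Sketch: allocate an unbound port accepting bindings from 'remote_dom'. */
static int alloc_unbound_port(domid_t remote_dom, evtchn_port_t *port)
{
    struct evtchn_alloc_unbound op = {
        .dom        = DOMID_SELF,   /* unprivileged callers must use self */
        .remote_dom = remote_dom,
    };
    int rc = HYPERVISOR_event_channel_op(EVTCHNOP_alloc_unbound, &op);

    if (rc == 0)
        *port = op.port;            /* OUT parameter: the fresh port */
    return rc;
}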
interface/physdev.h

/*
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#ifndef __XEN_PUBLIC_PHYSDEV_H__
#define __XEN_PUBLIC_PHYSDEV_H__

/*
 * Prototype for this hypercall is:
 *  int physdev_op(int cmd, void *args)
 * @cmd  == PHYSDEVOP_??? (physdev operation).
 * @args == Operation-specific extra arguments (NULL if none).
 */

/*
 * Notify end-of-interrupt (EOI) for the specified IRQ.
 * @arg == pointer to physdev_eoi structure.
 */
#define PHYSDEVOP_eoi 12
struct physdev_eoi {
    /* IN */
    uint32_t irq;
};

/*
 * Register a shared page for the hypervisor to indicate whether the guest
 * must issue PHYSDEVOP_eoi. The semantics of PHYSDEVOP_eoi change slightly
 * once the guest has used this function, in that the associated event
 * channel will automatically get unmasked. The page registered is used as
 * a bit array indexed by Xen's PIRQ value.
 */
#define PHYSDEVOP_pirq_eoi_gmfn_v1 17

/*
 * Register a shared page for the hypervisor to indicate whether the
 * guest must issue PHYSDEVOP_eoi. This hypercall is very similar to
 * PHYSDEVOP_pirq_eoi_gmfn_v1 but it doesn't change the semantics of
 * PHYSDEVOP_eoi. The page registered is used as a bit array indexed by
 * Xen's PIRQ value.
 */
#define PHYSDEVOP_pirq_eoi_gmfn_v2 28
struct physdev_pirq_eoi_gmfn {
    /* IN */
    xen_ulong_t gmfn;
};

/*
 * Query the status of an IRQ line.
 * @arg == pointer to physdev_irq_status_query structure.
 */
#define PHYSDEVOP_irq_status_query 5
struct physdev_irq_status_query {
    /* IN */
    uint32_t irq;
    /* OUT */
    uint32_t flags; /* XENIRQSTAT_* */
};

/* Need to call PHYSDEVOP_eoi when the IRQ has been serviced? */
#define _XENIRQSTAT_needs_eoi (0)
#define  XENIRQSTAT_needs_eoi (1U<<_XENIRQSTAT_needs_eoi)

/* IRQ shared by multiple guests? */
#define _XENIRQSTAT_shared (1)
#define  XENIRQSTAT_shared (1U<<_XENIRQSTAT_shared)

/*
 * Set the current VCPU's I/O privilege level.
 * @arg == pointer to physdev_set_iopl structure.
 */
#define PHYSDEVOP_set_iopl 6
struct physdev_set_iopl {
    /* IN */
    uint32_t iopl;
};

/*
 * Set the current VCPU's I/O-port permissions bitmap.
 * @arg == pointer to physdev_set_iobitmap structure.
 */
#define PHYSDEVOP_set_iobitmap 7
struct physdev_set_iobitmap {
    /* IN */
    uint8_t *bitmap;
    uint32_t nr_ports;
};

/*
 * Read or write an IO-APIC register.
 * @arg == pointer to physdev_apic structure.
 */
#define PHYSDEVOP_apic_read  8
#define PHYSDEVOP_apic_write 9
struct physdev_apic {
    /* IN */
    unsigned long apic_physbase;
    uint32_t reg;
    /* IN or OUT */
    uint32_t value;
};

/*
 * Allocate or free a physical upcall vector for the specified IRQ line.
 * @arg == pointer to physdev_irq structure.
 */
#define PHYSDEVOP_alloc_irq_vector 10
#define PHYSDEVOP_free_irq_vector  11
struct physdev_irq {
    /* IN */
    uint32_t irq;
    /* IN or OUT */
    uint32_t vector;
};

#define MAP_PIRQ_TYPE_MSI       0x0
#define MAP_PIRQ_TYPE_GSI       0x1
#define MAP_PIRQ_TYPE_UNKNOWN   0x2
#define MAP_PIRQ_TYPE_MSI_SEG   0x3
#define MAP_PIRQ_TYPE_MULTI_MSI 0x4

#define PHYSDEVOP_map_pirq 13
struct physdev_map_pirq {
    domid_t domid;
    /* IN */
    int type;
    /* IN or OUT */
    int index;
    /* IN - high 16 bits hold segment for ..._MSI_SEG and ..._MULTI_MSI */
    int pirq;
    /* IN */
    int bus;
    /* IN */
    int devfn;
    /* IN
     * - For MSI-X contains entry number.
     * - For MSI with ..._MULTI_MSI contains number of vectors.
     * OUT (..._MULTI_MSI only)
     * - Number of vectors allocated.
     */
    int entry_nr;
    /* IN */
    uint64_t table_base;
};

#define PHYSDEVOP_unmap_pirq 14
struct physdev_unmap_pirq {
    domid_t domid;
    /* IN */
    int pirq;
};

#define PHYSDEVOP_manage_pci_add    15
#define PHYSDEVOP_manage_pci_remove 16
struct physdev_manage_pci {
    /* IN */
    uint8_t bus;
    uint8_t devfn;
};

#define PHYSDEVOP_restore_msi 19
struct physdev_restore_msi {
    /* IN */
    uint8_t bus;
    uint8_t devfn;
};

#define PHYSDEVOP_manage_pci_add_ext 20
struct physdev_manage_pci_ext {
    /* IN */
    uint8_t bus;
    uint8_t devfn;
    unsigned is_extfn;
    unsigned is_virtfn;
    struct {
        uint8_t bus;
        uint8_t devfn;
    } physfn;
};
/*
 * Argument to physdev_op_compat() hypercall. Superseded by the new
 * physdev_op() hypercall since 0x00030202.
 */
struct physdev_op {
    uint32_t cmd;
    union {
        struct physdev_irq_status_query irq_status_query;
        struct physdev_set_iopl         set_iopl;
        struct physdev_set_iobitmap     set_iobitmap;
        struct physdev_apic             apic_op;
        struct physdev_irq              irq_op;
    } u;
};

#define PHYSDEVOP_setup_gsi 21
struct physdev_setup_gsi {
    int gsi;            /* IN */
    uint8_t triggering; /* IN */
    uint8_t polarity;   /* IN */
};

#define PHYSDEVOP_get_nr_pirqs 22
struct physdev_nr_pirqs {
    /* OUT */
    uint32_t nr_pirqs;
};

/* type is MAP_PIRQ_TYPE_GSI or MAP_PIRQ_TYPE_MSI
 * the hypercall returns a free pirq */
#define PHYSDEVOP_get_free_pirq 23
struct physdev_get_free_pirq {
    /* IN */
    int type;
    /* OUT */
    uint32_t pirq;
};

#define XEN_PCI_DEV_EXTFN  0x1
#define XEN_PCI_DEV_VIRTFN 0x2
#define XEN_PCI_DEV_PXM    0x4

#define XEN_PCI_MMCFG_RESERVED 0x1

#define PHYSDEVOP_pci_mmcfg_reserved 24
struct physdev_pci_mmcfg_reserved {
    uint64_t address;
    uint16_t segment;
    uint8_t start_bus;
    uint8_t end_bus;
    uint32_t flags;
};

#define PHYSDEVOP_pci_device_add 25
struct physdev_pci_device_add {
    /* IN */
    uint16_t seg;
    uint8_t bus;
    uint8_t devfn;
    uint32_t flags;
    struct {
        uint8_t bus;
        uint8_t devfn;
    } physfn;
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
    uint32_t optarr[];
#elif defined(__GNUC__)
    uint32_t optarr[0];
#endif
};

#define PHYSDEVOP_pci_device_remove 26
#define PHYSDEVOP_restore_msi_ext   27

/*
 * Dom0 should use these two to announce MMIO resources assigned to
 * MSI-X capable devices won't (prepare) or may (release) change.
 */
#define PHYSDEVOP_prepare_msix 30
#define PHYSDEVOP_release_msix 31
struct physdev_pci_device {
    /* IN */
    uint16_t seg;
    uint8_t bus;
    uint8_t devfn;
};

#define PHYSDEVOP_DBGP_RESET_PREPARE 1
#define PHYSDEVOP_DBGP_RESET_DONE    2

#define PHYSDEVOP_DBGP_BUS_UNKNOWN 0
#define PHYSDEVOP_DBGP_BUS_PCI     1

#define PHYSDEVOP_dbgp_op 29
struct physdev_dbgp_op {
    /* IN */
    uint8_t op;
    uint8_t bus;
    union {
        struct physdev_pci_device pci;
    } u;
};

/*
 * Notify that some PIRQ-bound event channels have been unmasked.
 * ** This command is obsolete since interface version 0x00030202 and is **
 * ** unsupported by newer versions of Xen.                              **
 */
#define PHYSDEVOP_IRQ_UNMASK_NOTIFY 4

/*
 * These all-capitals physdev operation names are superseded by the new
 * names (defined above) since interface version 0x00030202.
 */
#define PHYSDEVOP_IRQ_STATUS_QUERY  PHYSDEVOP_irq_status_query
#define PHYSDEVOP_SET_IOPL          PHYSDEVOP_set_iopl
#define PHYSDEVOP_SET_IOBITMAP      PHYSDEVOP_set_iobitmap
#define PHYSDEVOP_APIC_READ         PHYSDEVOP_apic_read
#define PHYSDEVOP_APIC_WRITE        PHYSDEVOP_apic_write
#define PHYSDEVOP_ASSIGN_VECTOR     PHYSDEVOP_alloc_irq_vector
#define PHYSDEVOP_FREE_VECTOR       PHYSDEVOP_free_irq_vector
#define PHYSDEVOP_IRQ_NEEDS_UNMASK_NOTIFY XENIRQSTAT_needs_eoi
#define PHYSDEVOP_IRQ_SHARED        XENIRQSTAT_shared

#endif /* __XEN_PUBLIC_PHYSDEV_H__ */
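The physdev_op(int cmd, void *args) prototype at the top of the header translates directly into call sites like the sketch below, which queries whether an IRQ needs an explicit EOI. The HYPERVISOR_physdev_op wrapper is assumed from the kernel's Xen hypercall headers; the helper name is hypothetical.

/* Sketch: ask Xen whether 'irq' needs an explicit PHYSDEVOP_eoi. */
static int irq_needs_eoi(uint32_t irq, int *needs_eoi)
{
    struct physdev_irq_status_query query = { .irq = irq };
    int rc = HYPERVISOR_physdev_op(PHYSDEVOP_irq_status_query, &query);

    if (rc == 0)
        *needs_eoi = !!(query.flags & XENIRQSTAT_needs_eoi);
    return rc;
}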
interface/io/vscsiif.h

/******************************************************************************
 * vscsiif.h
 *
 * Based on the blkif.h code.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 *
 * Copyright(c) FUJITSU Limited 2008.
 */

#ifndef __XEN__PUBLIC_IO_SCSI_H__
#define __XEN__PUBLIC_IO_SCSI_H__

#include "ring.h"
#include "../grant_table.h"

/*
 * Feature and Parameter Negotiation
 * =================================
 * The two halves of a Xen pvSCSI driver utilize nodes within the XenStore to
 * communicate capabilities and to negotiate operating parameters. This
 * section enumerates these nodes which reside in the respective front and
 * backend portions of the XenStore, following the XenBus convention.
 *
 * Any specified default value is in effect if the corresponding XenBus node
 * is not present in the XenStore.
 *
 * XenStore nodes in sections marked "PRIVATE" are solely for use by the
 * driver side whose XenBus tree contains them.
 *
 *****************************************************************************
 *                            Backend XenBus Nodes
 *****************************************************************************
 *
 *------------------ Backend Device Identification (PRIVATE) -----------------
 *
 * p-devname
 *      Values:         string
 *
 *      A free string used to identify the physical device (e.g. a disk name).
 *
 * p-dev
 *      Values:         string
 *
 *      A string specifying the backend device: either a 4-tuple "h:c:t:l"
 *      (host, controller, target, lun, all integers), or a WWN (e.g.
 *      "naa.60014054ac780582").
 *
 * v-dev
 *      Values:         string
 *
 *      A string specifying the frontend device in form of a 4-tuple "h:c:t:l"
 *      (host, controller, target, lun, all integers).
 *
 *--------------------------------- Features ---------------------------------
 *
 * feature-sg-grant
 *      Values:         unsigned [VSCSIIF_SG_TABLESIZE...65535]
 *      Default Value:  0
 *
 *      Specifies the maximum number of scatter/gather elements in grant pages
 *      supported. If not set, the backend supports up to VSCSIIF_SG_TABLESIZE
 *      SG elements specified directly in the request.
 *
 *****************************************************************************
 *                            Frontend XenBus Nodes
 *****************************************************************************
 *
 *----------------------- Request Transport Parameters -----------------------
 *
 * event-channel
 *      Values:         unsigned
 *
 *      The identifier of the Xen event channel used to signal activity
 *      in the ring buffer.
 *
 * ring-ref
 *      Values:         unsigned
 *
 *      The Xen grant reference granting permission for the backend to map
 *      the sole page in a single page sized ring buffer.
 *
 * protocol
 *      Values:         string (XEN_IO_PROTO_ABI_*)
 *      Default Value:  XEN_IO_PROTO_ABI_NATIVE
 *
 *      The machine ABI rules governing the format of all ring request and
 *      response structures.
 */

/* Requests from the frontend to the backend */

/*
 * Request a SCSI operation specified via a CDB in vscsiif_request.cmnd.
 * The target is specified via channel, id and lun.
 *
 * The operation to be performed is specified via a CDB in cmnd[], the length
 * of the CDB is in cmd_len. sc_data_direction specifies the direction of data
 * (to the device, from the device, or none at all).
 *
 * If data is to be transferred to or from the device the buffer(s) in the
 * guest memory is/are specified via one or multiple scsiif_request_segment
 * descriptors, each specifying a memory page via a grant_ref_t, an offset
 * into the page and the length of the area in that page. All
 * scsiif_request_segment areas concatenated form the resulting data buffer
 * used by the operation. If the number of scsiif_request_segment areas is
 * not too large (less than or equal to VSCSIIF_SG_TABLESIZE) the areas can
 * be specified directly in the seg[] array and the number of valid
 * scsiif_request_segment elements is to be set in nr_segments.
 *
 * If "feature-sg-grant" in the Xenstore is set it is possible to specify more
 * than VSCSIIF_SG_TABLESIZE scsiif_request_segment elements via indirection.
 * The maximum number of allowed scsiif_request_segment elements is the value
 * of the "feature-sg-grant" entry from Xenstore. When using indirection the
 * seg[] array doesn't contain specifications of the data buffers, but
 * references to scsiif_request_segment arrays, which in turn reference the
 * data buffers. While nr_segments holds the number of populated seg[] entries
 * (plus the set VSCSIIF_SG_GRANT bit), the number of scsiif_request_segment
 * elements referencing the target data buffers is calculated from the lengths
 * of the seg[] elements (the sum of all valid seg[].length divided by the
 * size of one scsiif_request_segment structure).
 */
#define VSCSIIF_ACT_SCSI_CDB 1

/*
 * Request abort of a running operation for the specified target given by
 * channel, id, lun and the operation's rqid in ref_rqid.
 */
#define VSCSIIF_ACT_SCSI_ABORT 2

/*
 * Request a device reset of the specified target (channel and id).
 */
#define VSCSIIF_ACT_SCSI_RESET 3

/*
 * Preset scatter/gather elements for a following request. Deprecated.
 * Keeping the define only to avoid usage of the value "4" for other actions.
 */
#define VSCSIIF_ACT_SCSI_SG_PRESET 4

/*
 * Maximum scatter/gather segments per request.
 *
 * Considering balance between allocating at least 16 "vscsiif_request"
 * structures on one page (4096 bytes) and the number of scatter/gather
 * elements needed, we decided to use 26 as a magic number.
 *
 * If "feature-sg-grant" is set, more scatter/gather elements can be specified
 * by placing them in one or more (up to VSCSIIF_SG_TABLESIZE) granted pages.
 * In this case the vscsiif_request seg elements don't contain references to
 * the user data, but to the SG elements referencing the user data.
 */
#define VSCSIIF_SG_TABLESIZE 26

/*
 * based on Linux kernel 2.6.18, still valid
 *
 * Changing these values requires support of multiple protocols via the rings
 * as "old clients" will blindly use these values and the resulting structure
 * sizes.
 */
#define VSCSIIF_MAX_COMMAND_SIZE  16
#define VSCSIIF_SENSE_BUFFERSIZE  96

struct scsiif_request_segment {
    grant_ref_t gref;
    uint16_t offset;
    uint16_t length;
};

#define VSCSIIF_SG_PER_PAGE (PAGE_SIZE / sizeof(struct scsiif_request_segment))

/* Size of one request is 252 bytes */
struct vscsiif_request {
    uint16_t rqid;                 /* private guest value, echoed in resp  */
    uint8_t act;                   /* command between backend and frontend */
    uint8_t cmd_len;               /* valid CDB bytes                      */

    uint8_t cmnd[VSCSIIF_MAX_COMMAND_SIZE]; /* the CDB */
    uint16_t timeout_per_command;  /* deprecated                           */
    uint16_t channel, id, lun;     /* (virtual) device specification       */
    uint16_t ref_rqid;             /* command abort reference              */
    uint8_t sc_data_direction;     /* for DMA_TO_DEVICE(1)
                                      DMA_FROM_DEVICE(2)
                                      DMA_NONE(3) requests               */
    uint8_t nr_segments;           /* Number of pieces of scatter-gather   */
/*
 * flag in nr_segments: SG elements via grant page
 *
 * If VSCSIIF_SG_GRANT is set, the low 7 bits of nr_segments specify the
 * number of grant pages containing SG elements. Usable if
 * "feature-sg-grant" is set.
 */
#define VSCSIIF_SG_GRANT 0x80

    struct scsiif_request_segment seg[VSCSIIF_SG_TABLESIZE];
    uint32_t reserved[3];
};

/* Size of one response is 252 bytes */
struct vscsiif_response {
    uint16_t rqid;          /* identifies request */
    uint8_t padding;
    uint8_t sense_len;
    uint8_t sense_buffer[VSCSIIF_SENSE_BUFFERSIZE];
    int32_t rslt;
    uint32_t residual_len;  /* request bufflen -
                               return the value from physical device */
    uint32_t reserved[36];
};

DEFINE_RING_TYPES(vscsiif, struct vscsiif_request, struct vscsiif_response);

#endif /*__XEN__PUBLIC_IO_SCSI_H__*/
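A short sketch of the "feature-sg-grant" arithmetic documented above: with VSCSIIF_SG_GRANT set, the low bits of nr_segments give the number of granted pages described by seg[], and the real element count is the summed seg[].length divided by the size of one scsiif_request_segment. The helper name is hypothetical; it uses only definitions from the header.

static unsigned int vscsiif_indirect_nr_segs(const struct vscsiif_request *req)
{
    unsigned int pages = req->nr_segments & ~VSCSIIF_SG_GRANT;
    unsigned int i, bytes = 0;

    /* Each seg[] entry here describes a page of indirect SG elements. */
    for (i = 0; i < pages; i++)
        bytes += req->seg[i].length;
    return bytes / sizeof(struct scsiif_request_segment);
}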
interface/io/netif.h

/******************************************************************************
 * xen_netif.h
 *
 * Unified network-device I/O interface for Xen guest OSes.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to
 * deal in the Software without restriction, including without limitation the
 * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
 * sell copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 *
 * Copyright (c) 2003-2004, Keir Fraser
 */

#ifndef __XEN_PUBLIC_IO_XEN_NETIF_H__
#define __XEN_PUBLIC_IO_XEN_NETIF_H__

#include "ring.h"
#include "../grant_table.h"

/*
 * Older implementations of the Xen network frontend / backend have an
 * implicit dependency on MAX_SKB_FRAGS as the maximum number of ring
 * slots a skb can use. Netfront / netback may not work as expected when
 * frontend and backend have different MAX_SKB_FRAGS.
 *
 * A better approach is to add a mechanism for netfront / netback to
 * negotiate this value. However we cannot fix all possible frontends,
 * so we need to define a value which states the minimum slots a backend
 * must support.
 *
 * The minimum value derives from older Linux kernels' MAX_SKB_FRAGS
 * (18), which is proven to work with most frontends. Any new backend
 * which doesn't negotiate with the frontend should expect the frontend
 * to send a valid packet using slots up to this value.
 */
#define XEN_NETIF_NR_SLOTS_MIN 18

/*
 * Notifications after enqueuing any type of message should be conditional on
 * the appropriate req_event or rsp_event field in the shared ring.
 * If the client sends notification for rx requests then it should specify
 * feature 'feature-rx-notify' via xenbus. Otherwise the backend will assume
 * that it cannot safely queue packets (as it may not be kicked to send them).
 */

/*
 * "feature-split-event-channels" is introduced to separate guest TX
 * and RX notification. Backend either doesn't support this feature or
 * advertises it via xenstore as 0 (disabled) or 1 (enabled).
 *
 * To make use of this feature, frontend should allocate two event
 * channels for TX and RX, advertise them to backend as
 * "event-channel-tx" and "event-channel-rx" respectively. If frontend
 * doesn't want to use this feature, it just writes "event-channel"
 * node as before.
 */

/*
 * Multiple transmit and receive queues:
 * If supported, the backend will write the key "multi-queue-max-queues" to
 * the directory for that vif, and set its value to the maximum supported
 * number of queues.
 * Frontends that are aware of this feature and wish to use it can write the
 * key "multi-queue-num-queues", set to the number they wish to use, which
 * must be greater than zero, and no more than the value reported by the
 * backend in "multi-queue-max-queues".
 *
 * Queues replicate the shared rings and event channels.
 * "feature-split-event-channels" may optionally be used when using
 * multiple queues, but is not mandatory.
 *
 * Each queue consists of one shared ring pair, i.e. there must be the same
 * number of tx and rx rings.
 *
 * For frontends requesting just one queue, the usual event-channel and
 * ring-ref keys are written as before, simplifying the backend processing
 * to avoid distinguishing between a frontend that doesn't understand the
 * multi-queue feature, and one that does, but requested only one queue.
 *
 * Frontends requesting two or more queues must not write the toplevel
 * event-channel (or event-channel-{tx,rx}) and {tx,rx}-ring-ref keys,
 * instead writing those keys under sub-keys having the name "queue-N" where
 * N is the integer ID of the queue for which those keys belong. Queues
 * are indexed from zero. For example, a frontend with two queues and split
 * event channels must write the following set of queue-related keys:
 *
 * /local/domain/1/device/vif/0/multi-queue-num-queues = "2"
 * /local/domain/1/device/vif/0/queue-0 = ""
 * /local/domain/1/device/vif/0/queue-0/tx-ring-ref = "<ring-ref-tx0>"
 * /local/domain/1/device/vif/0/queue-0/rx-ring-ref = "<ring-ref-rx0>"
 * /local/domain/1/device/vif/0/queue-0/event-channel-tx = "<evtchn-tx0>"
 * /local/domain/1/device/vif/0/queue-0/event-channel-rx = "<evtchn-rx0>"
 * /local/domain/1/device/vif/0/queue-1 = ""
 * /local/domain/1/device/vif/0/queue-1/tx-ring-ref = "<ring-ref-tx1>"
 * /local/domain/1/device/vif/0/queue-1/rx-ring-ref = "<ring-ref-rx1>"
 * /local/domain/1/device/vif/0/queue-1/event-channel-tx = "<evtchn-tx1>"
 * /local/domain/1/device/vif/0/queue-1/event-channel-rx = "<evtchn-rx1>"
 *
 * If there is any inconsistency in the XenStore data, the backend may
 * choose not to connect any queues, instead treating the request as an
 * error.
This includes scenarios where more (or fewer) queues were * requested than the frontend provided details for. * * Mapping of packets to queues is considered to be a function of the * transmitting system (backend or frontend) and is not negotiated * between the two. Guests are free to transmit packets on any queue * they choose, provided it has been set up correctly. Guests must be * prepared to receive packets on any queue they have requested be set up. */ /* * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum * offload off or on. If it is missing then the feature is assumed to be on. * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum * offload on or off. If it is missing then the feature is assumed to be off. */ /* * "feature-gso-tcpv4" and "feature-gso-tcpv6" advertise the capability to * handle large TCP packets (in IPv4 or IPv6 form respectively). Neither * frontends nor backends are assumed to be capable unless the flags are * present. */ /* * "feature-multicast-control" and "feature-dynamic-multicast-control" * advertise the capability to filter ethernet multicast packets in the * backend. If the frontend wishes to take advantage of this feature then * it may set "request-multicast-control". If the backend only advertises * "feature-multicast-control" then "request-multicast-control" must be set * before the frontend moves into the connected state. The backend will * sample the value on this state transition and any subsequent change in * value will have no effect. However, if the backend also advertises * "feature-dynamic-multicast-control" then "request-multicast-control" * may be set by the frontend at any time. In this case, the backend will * watch the value and re-sample on watch events. * * If the sampled value of "request-multicast-control" is set then the * backend transmit side should no longer flood multicast packets to the * frontend, it should instead drop any multicast packet that does not * match in a filter list. * The list is amended by the frontend by sending dummy transmit requests * containing XEN_NETIF_EXTRA_TYPE_MCAST_{ADD,DEL} extra-info fragments as * specified below. * Note that the filter list may be amended even if the sampled value of * "request-multicast-control" is not set, however the filter should only * be applied if it is set. */ /* * Control ring * ============ * * Some features, such as hashing (detailed below), require a * significant amount of out-of-band data to be passed from frontend to * backend. Use of xenstore is not suitable for large quantities of data * because of quota limitations and so a dedicated 'control ring' is used. * The ability of the backend to use a control ring is advertised by * setting: * * /local/domain/X/backend/<domid>/<vif>/feature-ctrl-ring = "1" * * The frontend provides a control ring to the backend by setting: * * /local/domain/<domid>/device/vif/<vif>/ctrl-ring-ref = <gref> * /local/domain/<domid>/device/vif/<vif>/event-channel-ctrl = <port> * * where <gref> is the grant reference of the shared page used to * implement the control ring and <port> is an event channel to be used * as a mailbox interrupt. These keys must be set before the frontend * moves into the connected state. * * The control ring uses a fixed request/response message size and is * balanced (i.e. one request to one response), so operationally it is much * the same as a transmit or receive ring. * Note that there is no requirement that responses are issued in the same * order as requests. 
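 *
 * Illustrative sketch (not part of the original header): a Linux
 * frontend could publish the two control-ring keys with the generic
 * xenbus helpers, assuming "gref" and "evtchn" were obtained beforehand
 * via gnttab_grant_foreign_access() and xenbus_alloc_evtchn():
 *
 *   err = xenbus_printf(XBT_NIL, dev->nodename,
 *                       "ctrl-ring-ref", "%u", gref);
 *   if (!err)
 *           err = xenbus_printf(XBT_NIL, dev->nodename,
 *                               "event-channel-ctrl", "%u", evtchn);
 *
 * Both keys must be in place before the frontend switches to the
 * connected state, as required above.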
*/ /* * Hash types * ========== * * For the purposes of the definitions below, 'Packet[]' is an array of * octets containing an IP packet without options, 'Array[X..Y]' means a * sub-array of 'Array' containing bytes X thru Y inclusive, and '+' is * used to indicate concatenation of arrays. */ /* * A hash calculated over an IP version 4 header as follows: * * Buffer[0..8] = Packet[12..15] (source address) + * Packet[16..19] (destination address) * * Result = Hash(Buffer, 8) */ #define _XEN_NETIF_CTRL_HASH_TYPE_IPV4 0 #define XEN_NETIF_CTRL_HASH_TYPE_IPV4 \ (1 << _XEN_NETIF_CTRL_HASH_TYPE_IPV4) /* * A hash calculated over an IP version 4 header and TCP header as * follows: * * Buffer[0..12] = Packet[12..15] (source address) + * Packet[16..19] (destination address) + * Packet[20..21] (source port) + * Packet[22..23] (destination port) * * Result = Hash(Buffer, 12) */ #define _XEN_NETIF_CTRL_HASH_TYPE_IPV4_TCP 1 #define XEN_NETIF_CTRL_HASH_TYPE_IPV4_TCP \ (1 << _XEN_NETIF_CTRL_HASH_TYPE_IPV4_TCP) /* * A hash calculated over an IP version 6 header as follows: * * Buffer[0..32] = Packet[8..23] (source address ) + * Packet[24..39] (destination address) * * Result = Hash(Buffer, 32) */ #define _XEN_NETIF_CTRL_HASH_TYPE_IPV6 2 #define XEN_NETIF_CTRL_HASH_TYPE_IPV6 \ (1 << _XEN_NETIF_CTRL_HASH_TYPE_IPV6) /* * A hash calculated over an IP version 6 header and TCP header as * follows: * * Buffer[0..36] = Packet[8..23] (source address) + * Packet[24..39] (destination address) + * Packet[40..41] (source port) + * Packet[42..43] (destination port) * * Result = Hash(Buffer, 36) */ #define _XEN_NETIF_CTRL_HASH_TYPE_IPV6_TCP 3 #define XEN_NETIF_CTRL_HASH_TYPE_IPV6_TCP \ (1 << _XEN_NETIF_CTRL_HASH_TYPE_IPV6_TCP) /* * Hash algorithms * =============== */ #define XEN_NETIF_CTRL_HASH_ALGORITHM_NONE 0 /* * Toeplitz hash: */ #define XEN_NETIF_CTRL_HASH_ALGORITHM_TOEPLITZ 1 /* * This algorithm uses a 'key' as well as the data buffer itself. * (Buffer[] and Key[] are treated as shift-registers where the MSB of * Buffer/Key[0] is considered 'left-most' and the LSB of Buffer/Key[N-1] * is the 'right-most'). * * Value = 0 * For number of bits in Buffer[] * If (left-most bit of Buffer[] is 1) * Value ^= left-most 32 bits of Key[] * Key[] << 1 * Buffer[] << 1 * * The code below is provided for convenience where an operating system * does not already provide an implementation. */ #ifdef XEN_NETIF_DEFINE_TOEPLITZ static uint32_t xen_netif_toeplitz_hash(const uint8_t *key, unsigned int keylen, const uint8_t *buf, unsigned int buflen) { unsigned int keyi, bufi; uint64_t prefix = 0; uint64_t hash = 0; /* Pre-load prefix with the first 8 bytes of the key */ for (keyi = 0; keyi < 8; keyi++) { prefix <<= 8; prefix |= (keyi < keylen) ? key[keyi] : 0; } for (bufi = 0; bufi < buflen; bufi++) { uint8_t byte = buf[bufi]; unsigned int bit; for (bit = 0; bit < 8; bit++) { if (byte & 0x80) hash ^= prefix; prefix <<= 1; byte <<= 1; } /* * 'prefix' has now been left-shifted by 8, so * OR in the next byte. */ prefix |= (keyi < keylen) ? key[keyi] : 0; keyi++; } /* The valid part of the hash is in the upper 32 bits. 
*/ return hash >> 32; } #endif /* XEN_NETIF_DEFINE_TOEPLITZ */ /* * Control requests (struct xen_netif_ctrl_request) * ================================================ * * All requests have the following format: * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * | id | type | data[0] | * +-----+-----+-----+-----+-----+-----+-----+-----+ * | data[1] | data[2] | * +-----+-----+-----+-----+-----------------------+ * * id: the request identifier, echoed in response. * type: the type of request (see below) * data[]: any data associated with the request (determined by type) */ struct xen_netif_ctrl_request { uint16_t id; uint16_t type; #define XEN_NETIF_CTRL_TYPE_INVALID 0 #define XEN_NETIF_CTRL_TYPE_GET_HASH_FLAGS 1 #define XEN_NETIF_CTRL_TYPE_SET_HASH_FLAGS 2 #define XEN_NETIF_CTRL_TYPE_SET_HASH_KEY 3 #define XEN_NETIF_CTRL_TYPE_GET_HASH_MAPPING_SIZE 4 #define XEN_NETIF_CTRL_TYPE_SET_HASH_MAPPING_SIZE 5 #define XEN_NETIF_CTRL_TYPE_SET_HASH_MAPPING 6 #define XEN_NETIF_CTRL_TYPE_SET_HASH_ALGORITHM 7 uint32_t data[3]; }; /* * Control responses (struct xen_netif_ctrl_response) * ================================================== * * All responses have the following format: * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * | id | type | status | * +-----+-----+-----+-----+-----+-----+-----+-----+ * | data | * +-----+-----+-----+-----+ * * id: the corresponding request identifier * type: the type of the corresponding request * status: the status of request processing * data: any data associated with the response (determined by type and * status) */ struct xen_netif_ctrl_response { uint16_t id; uint16_t type; uint32_t status; #define XEN_NETIF_CTRL_STATUS_SUCCESS 0 #define XEN_NETIF_CTRL_STATUS_NOT_SUPPORTED 1 #define XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER 2 #define XEN_NETIF_CTRL_STATUS_BUFFER_OVERFLOW 3 uint32_t data; }; /* * Control messages * ================ * * XEN_NETIF_CTRL_TYPE_SET_HASH_ALGORITHM * -------------------------------------- * * This is sent by the frontend to set the desired hash algorithm. * * Request: * * type = XEN_NETIF_CTRL_TYPE_SET_HASH_ALGORITHM * data[0] = a XEN_NETIF_CTRL_HASH_ALGORITHM_* value * data[1] = 0 * data[2] = 0 * * Response: * * status = XEN_NETIF_CTRL_STATUS_NOT_SUPPORTED - Operation not * supported * XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER - The algorithm is not * supported * XEN_NETIF_CTRL_STATUS_SUCCESS - Operation successful * * NOTE: Setting data[0] to XEN_NETIF_CTRL_HASH_ALGORITHM_NONE disables * hashing and the backend is free to choose how it steers packets * to queues (which is the default behaviour). * * XEN_NETIF_CTRL_TYPE_GET_HASH_FLAGS * ---------------------------------- * * This is sent by the frontend to query the types of hash supported by * the backend. * * Request: * * type = XEN_NETIF_CTRL_TYPE_GET_HASH_FLAGS * data[0] = 0 * data[1] = 0 * data[2] = 0 * * Response: * * status = XEN_NETIF_CTRL_STATUS_NOT_SUPPORTED - Operation not supported * XEN_NETIF_CTRL_STATUS_SUCCESS - Operation successful * data = supported hash types (if operation was successful) * * NOTE: A valid hash algorithm must be selected before this operation can * succeed. * * XEN_NETIF_CTRL_TYPE_SET_HASH_FLAGS * ---------------------------------- * * This is sent by the frontend to set the types of hash that the backend * should calculate. (See above for hash type definitions). * Note that the 'maximal' type of hash should always be chosen. 
For * example, if the frontend sets both IPV4 and IPV4_TCP hash types then * the latter hash type should be calculated for any TCP packet and the * former only calculated for non-TCP packets. * * Request: * * type = XEN_NETIF_CTRL_TYPE_SET_HASH_FLAGS * data[0] = bitwise OR of XEN_NETIF_CTRL_HASH_TYPE_* values * data[1] = 0 * data[2] = 0 * * Response: * * status = XEN_NETIF_CTRL_STATUS_NOT_SUPPORTED - Operation not * supported * XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER - One or more flag * value is invalid or * unsupported * XEN_NETIF_CTRL_STATUS_SUCCESS - Operation successful * data = 0 * * NOTE: A valid hash algorithm must be selected before this operation can * succeed. * Also, setting data[0] to zero disables hashing and the backend * is free to choose how it steers packets to queues. * * XEN_NETIF_CTRL_TYPE_SET_HASH_KEY * -------------------------------- * * This is sent by the frontend to set the key of the hash if the algorithm * requires it. (See hash algorithms above). * * Request: * * type = XEN_NETIF_CTRL_TYPE_SET_HASH_KEY * data[0] = grant reference of page containing the key (assumed to * start at beginning of grant) * data[1] = size of key in octets * data[2] = 0 * * Response: * * status = XEN_NETIF_CTRL_STATUS_NOT_SUPPORTED - Operation not * supported * XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER - Key size is invalid * XEN_NETIF_CTRL_STATUS_BUFFER_OVERFLOW - Key size is larger * than the backend * supports * XEN_NETIF_CTRL_STATUS_SUCCESS - Operation successful * data = 0 * * NOTE: Any key octets not specified are assumed to be zero (the key * is assumed to be empty by default) and specifying a new key * invalidates any previous key, hence specifying a key size of * zero will clear the key (which ensures that the calculated hash * will always be zero). * The maximum size of key is algorithm and backend specific, but * is also limited by the single grant reference. * The grant reference may be read-only and must remain valid until * the response has been processed. * * XEN_NETIF_CTRL_TYPE_GET_HASH_MAPPING_SIZE * ----------------------------------------- * * This is sent by the frontend to query the maximum size of mapping * table supported by the backend. The size is specified in terms of * table entries. * * Request: * * type = XEN_NETIF_CTRL_TYPE_GET_HASH_MAPPING_SIZE * data[0] = 0 * data[1] = 0 * data[2] = 0 * * Response: * * status = XEN_NETIF_CTRL_STATUS_NOT_SUPPORTED - Operation not supported * XEN_NETIF_CTRL_STATUS_SUCCESS - Operation successful * data = maximum number of entries allowed in the mapping table * (if operation was successful) or zero if a mapping table is * not supported (i.e. hash mapping is done only by modular * arithmetic). * * XEN_NETIF_CTRL_TYPE_SET_HASH_MAPPING_SIZE * ------------------------------------- * * This is sent by the frontend to set the actual size of the mapping * table to be used by the backend. The size is specified in terms of * table entries. * Any previous table is invalidated by this message and any new table * is assumed to be zero filled. * * Request: * * type = XEN_NETIF_CTRL_TYPE_SET_HASH_MAPPING_SIZE * data[0] = number of entries in mapping table * data[1] = 0 * data[2] = 0 * * Response: * * status = XEN_NETIF_CTRL_STATUS_NOT_SUPPORTED - Operation not * supported * XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER - Table size is invalid * XEN_NETIF_CTRL_STATUS_SUCCESS - Operation successful * data = 0 * * NOTE: Setting data[0] to 0 means that hash mapping should be done * using modular arithmetic. 
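 *
 * Illustrative example (not part of the original header): a frontend
 * requesting a 128-entry mapping table would assemble the following
 * request (the id value is arbitrary and is echoed in the response):
 *
 *   struct xen_netif_ctrl_request req = {
 *           .id   = 42,
 *           .type = XEN_NETIF_CTRL_TYPE_SET_HASH_MAPPING_SIZE,
 *           .data = { 128, 0, 0 },
 *   };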
* * XEN_NETIF_CTRL_TYPE_SET_HASH_MAPPING * ------------------------------------ * * This is sent by the frontend to set the content of the table mapping * hash value to queue number. The backend should calculate the hash from * the packet header, use it as an index into the table (modulo the size * of the table) and then steer the packet to the queue number found at * that index. * * Request: * * type = XEN_NETIF_CTRL_TYPE_SET_HASH_MAPPING * data[0] = grant reference of page containing the mapping (sub-)table * (assumed to start at beginning of grant) * data[1] = size of (sub-)table in entries * data[2] = offset, in entries, of sub-table within overall table * * Response: * * status = XEN_NETIF_CTRL_STATUS_NOT_SUPPORTED - Operation not * supported * XEN_NETIF_CTRL_STATUS_INVALID_PARAMETER - Table size or content * is invalid * XEN_NETIF_CTRL_STATUS_BUFFER_OVERFLOW - Table size is larger * than the backend * supports * XEN_NETIF_CTRL_STATUS_SUCCESS - Operation successful * data = 0 * * NOTE: The overall table has the following format: * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * | mapping[0] | mapping[1] | * +-----+-----+-----+-----+-----+-----+-----+-----+ * | . | * | . | * | . | * +-----+-----+-----+-----+-----+-----+-----+-----+ * | mapping[N-2] | mapping[N-1] | * +-----+-----+-----+-----+-----+-----+-----+-----+ * * where N is specified by a XEN_NETIF_CTRL_TYPE_SET_HASH_MAPPING_SIZE * message and each mapping must specify a queue between 0 and * "multi-queue-num-queues" (see above). * The backend may support a mapping table larger than can be * mapped by a single grant reference. Thus sub-tables within a * larger table can be individually set by sending multiple messages * with differing offset values. Specifying a new sub-table does not * invalidate any table data outside that range. * The grant reference may be read-only and must remain valid until * the response has been processed. */ DEFINE_RING_TYPES(xen_netif_ctrl, struct xen_netif_ctrl_request, struct xen_netif_ctrl_response); /* * Guest transmit * ============== * * This is the 'wire' format for transmit (frontend -> backend) packets: * * Fragment 1: xen_netif_tx_request_t - flags = XEN_NETTXF_* * size = total packet size * [Extra 1: xen_netif_extra_info_t] - (only if fragment 1 flags include * XEN_NETTXF_extra_info) * ... * [Extra N: xen_netif_extra_info_t] - (only if extra N-1 flags include * XEN_NETIF_EXTRA_MORE) * ... * Fragment N: xen_netif_tx_request_t - (only if fragment N-1 flags include * XEN_NETTXF_more_data - flags on preceding * extras are not relevant here) * flags = 0 * size = fragment size * * NOTE: * * This format is slightly different from that used for receive * (backend -> frontend) packets. Specifically, in a multi-fragment * packet the actual size of fragment 1 can only be determined by * subtracting the sizes of fragments 2..N from the total packet size. * * Ring slot size is 12 octets, however not all request/response * structs use the full size. * * tx request data (xen_netif_tx_request_t) * ------------------------------------ * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * | grant ref | offset | flags | * +-----+-----+-----+-----+-----+-----+-----+-----+ * | id | size | * +-----+-----+-----+-----+ * * grant ref: Reference to buffer page. * offset: Offset within buffer page. * flags: XEN_NETTXF_*. * id: request identifier, echoed in response. * size: packet size in bytes.
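 *
 * Illustrative sketch (not part of the original header): because
 * fragment 1 carries the total packet size, the number of bytes it
 * actually holds in an n-slot packet req[0..n-1] can only be recovered
 * by subtraction, as noted above:
 *
 *   frag1_len = req[0].size;
 *   for (i = 1; i < n; i++)
 *           frag1_len -= req[i].size;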
* * tx response (xen_netif_tx_response_t) * --------------------------------- * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * | id | status | unused | * +-----+-----+-----+-----+-----+-----+-----+-----+ * | unused | * +-----+-----+-----+-----+ * * id: reflects id in transmit request * status: XEN_NETIF_RSP_* * * Guest receive * ============= * * This is the 'wire' format for receive (backend -> frontend) packets: * * Fragment 1: xen_netif_rx_request_t - flags = XEN_NETRXF_* * size = fragment size * [Extra 1: xen_netif_extra_info_t] - (only if fragment 1 flags include * XEN_NETRXF_extra_info) * ... * [Extra N: xen_netif_extra_info_t] - (only if extra N-1 flags include * XEN_NETIF_EXTRA_MORE) * ... * Fragment N: xen_netif_rx_request_t - (only if fragment N-1 flags include * XEN_NETRXF_more_data - flags on preceding * extras are not relevant here) * flags = 0 * size = fragment size * * NOTE: * * This format is slightly different from that used for transmit * (frontend -> backend) packets. Specifically, in a multi-fragment * packet the size of the packet can only be determined by summing the * sizes of fragments 1..N. * * Ring slot size is 8 octets. * * rx request (xen_netif_rx_request_t) * ------------------------------- * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * | id | pad | gref | * +-----+-----+-----+-----+-----+-----+-----+-----+ * * id: request identifier, echoed in response. * gref: reference to incoming granted frame. * * rx response (xen_netif_rx_response_t) * --------------------------------- * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * | id | offset | flags | status | * +-----+-----+-----+-----+-----+-----+-----+-----+ * * id: reflects id in receive request * offset: offset in page of start of received packet * flags: XEN_NETRXF_* * status: -ve: XEN_NETIF_RSP_*; +ve: Rx'ed pkt size. * * NOTE: Historically, to support GSO on the frontend receive side, Linux * netfront does not make use of the rx response id (because, as * described below, extra info structures overlay the id field). * Instead it assumes that responses always appear in the same ring * slot as their corresponding request. Thus, to maintain * compatibility, backends must make sure this is the case. * * Extra Info * ========== * * Can be present if initial request or response has XEN_NET{T,R}XF_extra_info, * or previous extra request has XEN_NETIF_EXTRA_MORE. * * The struct therefore needs to fit into either a tx or rx slot and * is therefore limited to 8 octets. * * NOTE: Because extra info data overlays the usual request/response * structures, there is no id information in the opposite direction. * So, if an extra info overlays an rx response the frontend can * assume that it is in the same ring slot as the request that was * consumed to make the slot available, and the backend must ensure * this assumption is true. * * extra info (xen_netif_extra_info_t) * ------------------------------- * * General format: * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * |type |flags| type specific data | * +-----+-----+-----+-----+-----+-----+-----+-----+ * | padding for tx | * +-----+-----+-----+-----+ * * type: XEN_NETIF_EXTRA_TYPE_* * flags: XEN_NETIF_EXTRA_FLAG_* * padding for tx: present only in the tx case due to 8 octet limit * from rx case. Not shown in type specific entries * below.
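 *
 * For illustration (not part of the original header), the 8 octet limit
 * can be checked at build time once the structures below are defined,
 * e.g. with Linux's BUILD_BUG_ON() helper:
 *
 *   BUILD_BUG_ON(sizeof(struct xen_netif_extra_info) !=
 *                sizeof(struct xen_netif_rx_response));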
* * XEN_NETIF_EXTRA_TYPE_GSO: * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * |type |flags| size |type | pad | features | * +-----+-----+-----+-----+-----+-----+-----+-----+ * * type: Must be XEN_NETIF_EXTRA_TYPE_GSO * flags: XEN_NETIF_EXTRA_FLAG_* * size: Maximum payload size of each segment. For example, * for TCP this is just the path MSS. * type: XEN_NETIF_GSO_TYPE_*: This determines the protocol of * the packet and any extra features required to segment the * packet properly. * features: XEN_NETIF_GSO_FEAT_*: This specifies any extra GSO * features required to process this packet, such as ECN * support for TCPv4. * * XEN_NETIF_EXTRA_TYPE_MCAST_{ADD,DEL}: * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * |type |flags| addr | * +-----+-----+-----+-----+-----+-----+-----+-----+ * * type: Must be XEN_NETIF_EXTRA_TYPE_MCAST_{ADD,DEL} * flags: XEN_NETIF_EXTRA_FLAG_* * addr: address to add/remove * * XEN_NETIF_EXTRA_TYPE_HASH: * * A backend that supports Toeplitz hashing is assumed to accept * this type of extra info in transmit packets. * A frontend that enables hashing is assumed to accept * this type of extra info in receive packets. * * 0 1 2 3 4 5 6 7 octet * +-----+-----+-----+-----+-----+-----+-----+-----+ * |type |flags|htype| alg |LSB ---- value ---- MSB| * +-----+-----+-----+-----+-----+-----+-----+-----+ * * type: Must be XEN_NETIF_EXTRA_TYPE_HASH * flags: XEN_NETIF_EXTRA_FLAG_* * htype: Hash type (one of _XEN_NETIF_CTRL_HASH_TYPE_* - see above) * alg: The algorithm used to calculate the hash (one of * XEN_NETIF_CTRL_HASH_ALGORITHM_* - see above) * value: Hash value */ /* Protocol checksum field is blank in the packet (hardware offload)? */ #define _XEN_NETTXF_csum_blank (0) #define XEN_NETTXF_csum_blank (1U<<_XEN_NETTXF_csum_blank) /* Packet data has been validated against protocol checksum. */ #define _XEN_NETTXF_data_validated (1) #define XEN_NETTXF_data_validated (1U<<_XEN_NETTXF_data_validated) /* Packet continues in the next request descriptor. */ #define _XEN_NETTXF_more_data (2) #define XEN_NETTXF_more_data (1U<<_XEN_NETTXF_more_data) /* Packet to be followed by extra descriptor(s). */ #define _XEN_NETTXF_extra_info (3) #define XEN_NETTXF_extra_info (1U<<_XEN_NETTXF_extra_info) #define XEN_NETIF_MAX_TX_SIZE 0xFFFF struct xen_netif_tx_request { grant_ref_t gref; uint16_t offset; uint16_t flags; uint16_t id; uint16_t size; }; /* Types of xen_netif_extra_info descriptors. */ #define XEN_NETIF_EXTRA_TYPE_NONE (0) /* Never used - invalid */ #define XEN_NETIF_EXTRA_TYPE_GSO (1) /* u.gso */ #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2) /* u.mcast */ #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3) /* u.mcast */ #define XEN_NETIF_EXTRA_TYPE_HASH (4) /* u.hash */ #define XEN_NETIF_EXTRA_TYPE_MAX (5) /* xen_netif_extra_info_t flags. */ #define _XEN_NETIF_EXTRA_FLAG_MORE (0) #define XEN_NETIF_EXTRA_FLAG_MORE (1U<<_XEN_NETIF_EXTRA_FLAG_MORE) /* GSO types */ #define XEN_NETIF_GSO_TYPE_NONE (0) #define XEN_NETIF_GSO_TYPE_TCPV4 (1) #define XEN_NETIF_GSO_TYPE_TCPV6 (2) /* * This structure needs to fit within both xen_netif_tx_request_t and * xen_netif_rx_response_t for compatibility.
*/ struct xen_netif_extra_info { uint8_t type; uint8_t flags; union { struct { uint16_t size; uint8_t type; uint8_t pad; uint16_t features; } gso; struct { uint8_t addr[6]; } mcast; struct { uint8_t type; uint8_t algorithm; uint8_t value[4]; } hash; uint16_t pad[3]; } u; }; struct xen_netif_tx_response { uint16_t id; int16_t status; }; struct xen_netif_rx_request { uint16_t id; /* Echoed in response message. */ uint16_t pad; grant_ref_t gref; }; /* Packet data has been validated against protocol checksum. */ #define _XEN_NETRXF_data_validated (0) #define XEN_NETRXF_data_validated (1U<<_XEN_NETRXF_data_validated) /* Protocol checksum field is blank in the packet (hardware offload)? */ #define _XEN_NETRXF_csum_blank (1) #define XEN_NETRXF_csum_blank (1U<<_XEN_NETRXF_csum_blank) /* Packet continues in the next request descriptor. */ #define _XEN_NETRXF_more_data (2) #define XEN_NETRXF_more_data (1U<<_XEN_NETRXF_more_data) /* Packet to be followed by extra descriptor(s). */ #define _XEN_NETRXF_extra_info (3) #define XEN_NETRXF_extra_info (1U<<_XEN_NETRXF_extra_info) /* Packet has GSO prefix. Deprecated but included for compatibility */ #define _XEN_NETRXF_gso_prefix (4) #define XEN_NETRXF_gso_prefix (1U<<_XEN_NETRXF_gso_prefix) struct xen_netif_rx_response { uint16_t id; uint16_t offset; uint16_t flags; int16_t status; }; /* * Generate xen_netif ring structures and types. */ DEFINE_RING_TYPES(xen_netif_tx, struct xen_netif_tx_request, struct xen_netif_tx_response); DEFINE_RING_TYPES(xen_netif_rx, struct xen_netif_rx_request, struct xen_netif_rx_response); #define XEN_NETIF_RSP_DROPPED -2 #define XEN_NETIF_RSP_ERROR -1 #define XEN_NETIF_RSP_OKAY 0 /* No response: used for auxiliary requests (e.g., xen_netif_extra_info_t). */ #define XEN_NETIF_RSP_NULL 1 #endif interface/io/xs_wire.h /* SPDX-License-Identifier: GPL-2.0 */ /* * Details of the "wire" protocol between Xen Store Daemon and client * library or guest kernel. * Copyright (C) 2005 Rusty Russell IBM Corporation */ #ifndef _XS_WIRE_H #define _XS_WIRE_H enum xsd_sockmsg_type { XS_DEBUG, XS_DIRECTORY, XS_READ, XS_GET_PERMS, XS_WATCH, XS_UNWATCH, XS_TRANSACTION_START, XS_TRANSACTION_END, XS_INTRODUCE, XS_RELEASE, XS_GET_DOMAIN_PATH, XS_WRITE, XS_MKDIR, XS_RM, XS_SET_PERMS, XS_WATCH_EVENT, XS_ERROR, XS_IS_DOMAIN_INTRODUCED, XS_RESUME, XS_SET_TARGET, XS_RESTRICT, XS_RESET_WATCHES, }; #define XS_WRITE_NONE "NONE" #define XS_WRITE_CREATE "CREATE" #define XS_WRITE_CREATE_EXCL "CREATE|EXCL" /* We hand errors as strings, for portability. */ struct xsd_errors { int errnum; const char *errstring; }; #define XSD_ERROR(x) { x, #x } static struct xsd_errors xsd_errors[] __attribute__((unused)) = { XSD_ERROR(EINVAL), XSD_ERROR(EACCES), XSD_ERROR(EEXIST), XSD_ERROR(EISDIR), XSD_ERROR(ENOENT), XSD_ERROR(ENOMEM), XSD_ERROR(ENOSPC), XSD_ERROR(EIO), XSD_ERROR(ENOTEMPTY), XSD_ERROR(ENOSYS), XSD_ERROR(EROFS), XSD_ERROR(EBUSY), XSD_ERROR(EAGAIN), XSD_ERROR(EISCONN) }; struct xsd_sockmsg { uint32_t type; /* XS_??? */ uint32_t req_id;/* Request identifier, echoed in daemon's response. */ uint32_t tx_id; /* Transaction id (0 if not related to a transaction). */ uint32_t len; /* Length of data following this. */ /* Generally followed by nul-terminated string(s). */ }; enum xs_watch_type { XS_WATCH_PATH = 0, XS_WATCH_TOKEN }; /* Inter-domain shared memory communications.
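 *
 * Illustrative sketch (not part of the original header): a producer
 * appending one byte to the request ring defined below would do
 * something like the following, where the write barrier (e.g. Linux's
 * virt_wmb()) orders the data write before the index update:
 *
 *   intf->req[MASK_XENSTORE_IDX(intf->req_prod)] = byte;
 *   virt_wmb();
 *   intf->req_prod++;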
*/ #define XENSTORE_RING_SIZE 1024 typedef uint32_t XENSTORE_RING_IDX; #define MASK_XENSTORE_IDX(idx) ((idx) & (XENSTORE_RING_SIZE-1)) struct xenstore_domain_interface { char req[XENSTORE_RING_SIZE]; /* Requests to xenstore daemon. */ char rsp[XENSTORE_RING_SIZE]; /* Replies and async watch events. */ XENSTORE_RING_IDX req_cons, req_prod; XENSTORE_RING_IDX rsp_cons, rsp_prod; }; /* Violating this is very bad. See docs/misc/xenstore.txt. */ #define XENSTORE_PAYLOAD_MAX 4096 #endif /* _XS_WIRE_H */ interface/io/9pfs.h /* * 9pfs.h -- Xen 9PFS transport * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (C) 2017 Stefano Stabellini <stefano@aporeto.com> */ #ifndef __XEN_PUBLIC_IO_9PFS_H__ #define __XEN_PUBLIC_IO_9PFS_H__ #include "xen/interface/io/ring.h" /* * See docs/misc/9pfs.markdown in xen.git for the full specification: * https://xenbits.xen.org/docs/unstable/misc/9pfs.html */ DEFINE_XEN_FLEX_RING_AND_INTF(xen_9pfs); #endif interface/io/pvcalls.h #ifndef __XEN_PUBLIC_IO_XEN_PVCALLS_H__ #define __XEN_PUBLIC_IO_XEN_PVCALLS_H__ #include <linux/net.h> #include <xen/interface/io/ring.h> #include <xen/interface/grant_table.h> /* "1" means socket, connect, release, bind, listen, accept and poll */ #define XENBUS_FUNCTIONS_CALLS "1" /* * See docs/misc/pvcalls.markdown in xen.git for the full specification: * https://xenbits.xen.org/docs/unstable/misc/pvcalls.html */ struct pvcalls_data_intf { RING_IDX in_cons, in_prod, in_error; uint8_t pad1[52]; RING_IDX out_cons, out_prod, out_error; uint8_t pad2[52]; RING_IDX ring_order; grant_ref_t ref[]; }; DEFINE_XEN_FLEX_RING(pvcalls); #define PVCALLS_SOCKET 0 #define PVCALLS_CONNECT 1 #define PVCALLS_RELEASE 2 #define PVCALLS_BIND 3 #define PVCALLS_LISTEN 4 #define PVCALLS_ACCEPT 5 #define PVCALLS_POLL 6 struct xen_pvcalls_request { uint32_t req_id; /* private to guest, echoed in response */ uint32_t cmd; /* command to execute */ union { struct xen_pvcalls_socket { uint64_t id; uint32_t domain; uint32_t type; uint32_t protocol; } socket; struct xen_pvcalls_connect { uint64_t id; uint8_t addr[28]; uint32_t len; uint32_t flags; grant_ref_t ref; uint32_t evtchn; } connect; struct xen_pvcalls_release { uint64_t id; uint8_t reuse; } release; struct xen_pvcalls_bind { uint64_t id; uint8_t addr[28]; uint32_t len; } bind; struct xen_pvcalls_listen { uint64_t id; uint32_t backlog; } listen; struct xen_pvcalls_accept { uint64_t id; uint64_t id_new; grant_ref_t ref;
uint32_t evtchn; } accept; struct xen_pvcalls_poll { uint64_t id; } poll; /* dummy member to force sizeof(struct xen_pvcalls_request) * to match across archs */ struct xen_pvcalls_dummy { uint8_t dummy[56]; } dummy; } u; }; struct xen_pvcalls_response { uint32_t req_id; uint32_t cmd; int32_t ret; uint32_t pad; union { struct _xen_pvcalls_socket { uint64_t id; } socket; struct _xen_pvcalls_connect { uint64_t id; } connect; struct _xen_pvcalls_release { uint64_t id; } release; struct _xen_pvcalls_bind { uint64_t id; } bind; struct _xen_pvcalls_listen { uint64_t id; } listen; struct _xen_pvcalls_accept { uint64_t id; } accept; struct _xen_pvcalls_poll { uint64_t id; } poll; struct _xen_pvcalls_dummy { uint8_t dummy[8]; } dummy; } u; }; DEFINE_RING_TYPES(xen_pvcalls, struct xen_pvcalls_request, struct xen_pvcalls_response); #endif interface/io/blkif.h /* SPDX-License-Identifier: GPL-2.0 */ /****************************************************************************** * blkif.h * * Unified block-device I/O interface for Xen guest OSes. * * Copyright (c) 2003-2004, Keir Fraser */ #ifndef __XEN_PUBLIC_IO_BLKIF_H__ #define __XEN_PUBLIC_IO_BLKIF_H__ #include <xen/interface/io/ring.h> #include <xen/interface/grant_table.h> /* * Front->back notifications: When enqueuing a new request, sending a * notification can be made conditional on req_event (i.e., the generic * hold-off mechanism provided by the ring macros). Backends must set * req_event appropriately (e.g., using RING_FINAL_CHECK_FOR_REQUESTS()). * * Back->front notifications: When enqueuing a new response, sending a * notification can be made conditional on rsp_event (i.e., the generic * hold-off mechanism provided by the ring macros). Frontends must set * rsp_event appropriately (e.g., using RING_FINAL_CHECK_FOR_RESPONSES()). */ typedef uint16_t blkif_vdev_t; typedef uint64_t blkif_sector_t; /* * Multiple hardware queues/rings: * If supported, the backend will write the key "multi-queue-max-queues" to * the directory for that vbd, and set its value to the maximum supported * number of queues. * Frontends that are aware of this feature and wish to use it can write the * key "multi-queue-num-queues" with the number they wish to use, which must be * greater than zero, and no more than the value reported by the backend in * "multi-queue-max-queues". * * For frontends requesting just one queue, the usual event-channel and * ring-ref keys are written as before, simplifying the backend processing * to avoid distinguishing between a frontend that doesn't understand the * multi-queue feature, and one that does, but requested only one queue. * * Frontends requesting two or more queues must not write the toplevel * event-channel and ring-ref keys, instead writing those keys under sub-keys * having the name "queue-N" where N is the integer ID of the queue/ring for * which those keys belong. Queues are indexed from zero.
* For example, a frontend with two queues must write the following set of * queue-related keys: * * /local/domain/1/device/vbd/0/multi-queue-num-queues = "2" * /local/domain/1/device/vbd/0/queue-0 = "" * /local/domain/1/device/vbd/0/queue-0/ring-ref = "<ring-ref#0>" * /local/domain/1/device/vbd/0/queue-0/event-channel = "<evtchn#0>" * /local/domain/1/device/vbd/0/queue-1 = "" * /local/domain/1/device/vbd/0/queue-1/ring-ref = "<ring-ref#1>" * /local/domain/1/device/vbd/0/queue-1/event-channel = "<evtchn#1>" * * It is also possible to use multiple queues/rings together with the * multi-page ring buffer feature. * For example, a frontend requesting two queues/rings, where the size of each * ring buffer is two pages, must write the following set of related keys: * * /local/domain/1/device/vbd/0/multi-queue-num-queues = "2" * /local/domain/1/device/vbd/0/ring-page-order = "1" * /local/domain/1/device/vbd/0/queue-0 = "" * /local/domain/1/device/vbd/0/queue-0/ring-ref0 = "<ring-ref#0>" * /local/domain/1/device/vbd/0/queue-0/ring-ref1 = "<ring-ref#1>" * /local/domain/1/device/vbd/0/queue-0/event-channel = "<evtchn#0>" * /local/domain/1/device/vbd/0/queue-1 = "" * /local/domain/1/device/vbd/0/queue-1/ring-ref0 = "<ring-ref#2>" * /local/domain/1/device/vbd/0/queue-1/ring-ref1 = "<ring-ref#3>" * /local/domain/1/device/vbd/0/queue-1/event-channel = "<evtchn#1>" * */ /* * REQUEST CODES. */ #define BLKIF_OP_READ 0 #define BLKIF_OP_WRITE 1 /* * Recognised only if "feature-barrier" is present in backend xenbus info. * The "feature-barrier" node contains a boolean indicating whether barrier * requests are likely to succeed or fail. Either way, a barrier request * may fail at any time with BLKIF_RSP_EOPNOTSUPP if it is unsupported by * the underlying block-device hardware. The boolean simply indicates whether * or not it is worthwhile for the frontend to attempt barrier requests. * If a backend does not recognise BLKIF_OP_WRITE_BARRIER, it should *not* * create the "feature-barrier" node! */ #define BLKIF_OP_WRITE_BARRIER 2 /* * Recognised if "feature-flush-cache" is present in backend xenbus * info. A flush will ask the underlying storage hardware to flush its * non-volatile caches as appropriate. The "feature-flush-cache" node * contains a boolean indicating whether flush requests are likely to * succeed or fail. Either way, a flush request may fail at any time * with BLKIF_RSP_EOPNOTSUPP if it is unsupported by the underlying * block-device hardware. The boolean simply indicates whether or not it * is worthwhile for the frontend to attempt flushes. If a backend does * not recognise BLKIF_OP_WRITE_FLUSH_CACHE, it should *not* create the * "feature-flush-cache" node! */ #define BLKIF_OP_FLUSH_DISKCACHE 3 /* * Recognised only if "feature-discard" is present in backend xenbus info. * The "feature-discard" node contains a boolean indicating whether trim * (ATA) or unmap (SCSI) - conveniently called discard - requests are likely * to succeed or fail. Either way, a discard request * may fail at any time with BLKIF_RSP_EOPNOTSUPP if it is unsupported by * the underlying block-device hardware. The boolean simply indicates whether * or not it is worthwhile for the frontend to attempt discard requests. * If a backend does not recognise BLKIF_OP_DISCARD, it should *not* * create the "feature-discard" node! * * Discard operation is a request for the underlying block device to mark * extents to be erased.
However, discard does not guarantee that the blocks * will be erased from the device - it is just a hint to the device * controller that these blocks are no longer in use. What the device * controller does with that information is left to the controller. * Discard operations are passed with sector_number as the * sector index to begin discard operations at and nr_sectors as the number of * sectors to be discarded. The specified sectors should be discarded if the * underlying block device supports trim (ATA) or unmap (SCSI) operations, * or a BLKIF_RSP_EOPNOTSUPP should be returned. * More information about trim/unmap operations at: * http://t13.org/Documents/UploadedDocuments/docs2008/ * e07154r6-Data_Set_Management_Proposal_for_ATA-ACS2.doc * http://www.seagate.com/staticfiles/support/disc/manuals/ * Interface%20manuals/100293068c.pdf * The backend can optionally provide three extra XenBus attributes to * further optimize the discard functionality: * 'discard-alignment' - Devices that support discard functionality may * internally allocate space in units that are bigger than the exported * logical block size. The discard-alignment parameter indicates how many bytes * the beginning of the partition is offset from the internal allocation unit's * natural alignment. * 'discard-granularity' - Devices that support discard functionality may * internally allocate space using units that are bigger than the logical block * size. The discard-granularity parameter indicates the size of the internal * allocation unit in bytes if reported by the device. Otherwise the * discard-granularity will be set to match the device's physical block size. * 'discard-secure' - All copies of the discarded sectors (potentially created * by garbage collection) must also be erased. To use this feature, the flag * BLKIF_DISCARD_SECURE must be set in the blkif_request_trim. */ #define BLKIF_OP_DISCARD 5 /* * Recognized if "feature-max-indirect-segments" is present in the backend * xenbus info. The "feature-max-indirect-segments" node contains the maximum * number of segments allowed by the backend per request. If the node is * present, the frontend might use blkif_request_indirect structs in order to * issue requests with more than BLKIF_MAX_SEGMENTS_PER_REQUEST (11). The * maximum number of indirect segments is fixed by the backend, but the * frontend can issue requests with any number of indirect segments as long as * it's less than the number provided by the backend. The indirect_grefs field * in blkif_request_indirect should be filled by the frontend with the * grant references of the pages that are holding the indirect segments. * These pages are filled with an array of blkif_request_segment that hold the * information about the segments. The number of indirect pages to use is * determined by the number of segments an indirect request contains. Every * indirect page can contain a maximum of * (PAGE_SIZE / sizeof(struct blkif_request_segment)) segments, so to * calculate the number of indirect pages to use we have to do * ceil(indirect_segments / (PAGE_SIZE / sizeof(struct blkif_request_segment))). * * If a backend does not recognize BLKIF_OP_INDIRECT, it should *not* * create the "feature-max-indirect-segments" node! */ #define BLKIF_OP_INDIRECT 6 /* * Maximum scatter/gather segments per request. * This is carefully chosen so that sizeof(struct blkif_ring) <= PAGE_SIZE. * NB. This could be 12 if the ring indexes weren't stored in the same page.
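 *
 * For illustration (not part of the original header), the ceiling
 * calculation quoted above for BLKIF_OP_INDIRECT maps directly onto
 * Linux's DIV_ROUND_UP() helper:
 *
 *   pages = DIV_ROUND_UP(nr_segments,
 *                        PAGE_SIZE / sizeof(struct blkif_request_segment));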
*/ #define BLKIF_MAX_SEGMENTS_PER_REQUEST 11 #define BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST 8 struct blkif_request_segment { grant_ref_t gref; /* reference to I/O buffer frame */ /* @first_sect: first sector in frame to transfer (inclusive). */ /* @last_sect: last sector in frame to transfer (inclusive). */ uint8_t first_sect, last_sect; }; struct blkif_request_rw { uint8_t nr_segments; /* number of segments */ blkif_vdev_t handle; /* only for read/write requests */ #ifndef CONFIG_X86_32 uint32_t _pad1; /* offsetof(blkif_request,u.rw.id) == 8 */ #endif uint64_t id; /* private guest value, echoed in resp */ blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */ struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST]; } __attribute__((__packed__)); struct blkif_request_discard { uint8_t flag; /* BLKIF_DISCARD_SECURE or zero. */ #define BLKIF_DISCARD_SECURE (1<<0) /* ignored if discard-secure=0 */ blkif_vdev_t _pad1; /* only for read/write requests */ #ifndef CONFIG_X86_32 uint32_t _pad2; /* offsetof(blkif_req..,u.discard.id)==8*/ #endif uint64_t id; /* private guest value, echoed in resp */ blkif_sector_t sector_number; uint64_t nr_sectors; uint8_t _pad3; } __attribute__((__packed__)); struct blkif_request_other { uint8_t _pad1; blkif_vdev_t _pad2; /* only for read/write requests */ #ifndef CONFIG_X86_32 uint32_t _pad3; /* offsetof(blkif_req..,u.other.id)==8*/ #endif uint64_t id; /* private guest value, echoed in resp */ } __attribute__((__packed__)); struct blkif_request_indirect { uint8_t indirect_op; uint16_t nr_segments; #ifndef CONFIG_X86_32 uint32_t _pad1; /* offsetof(blkif_...,u.indirect.id) == 8 */ #endif uint64_t id; blkif_sector_t sector_number; blkif_vdev_t handle; uint16_t _pad2; grant_ref_t indirect_grefs[BLKIF_MAX_INDIRECT_PAGES_PER_REQUEST]; #ifndef CONFIG_X86_32 uint32_t _pad3; /* make it 64 byte aligned */ #else uint64_t _pad3; /* make it 64 byte aligned */ #endif } __attribute__((__packed__)); struct blkif_request { uint8_t operation; /* BLKIF_OP_??? */ union { struct blkif_request_rw rw; struct blkif_request_discard discard; struct blkif_request_other other; struct blkif_request_indirect indirect; } u; } __attribute__((__packed__)); struct blkif_response { uint64_t id; /* copied from request */ uint8_t operation; /* copied from request */ int16_t status; /* BLKIF_RSP_??? */ }; /* * STATUS RETURN CODES. */ /* Operation not supported (only happens on barrier writes). */ #define BLKIF_RSP_EOPNOTSUPP -2 /* Operation failed for some unspecified reason (-EIO). */ #define BLKIF_RSP_ERROR -1 /* Operation completed successfully. */ #define BLKIF_RSP_OKAY 0 /* * Generate blkif ring structures and types. 
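 *
 * (Illustrative note, not part of the original header: the macro below,
 * from ring.h, expands to struct blkif_sring plus the blkif_front_ring
 * and blkif_back_ring types together with the usual
 * RING_GET_REQUEST()/RING_GET_RESPONSE() accessors.  A frontend would
 * typically initialise its side of a freshly granted shared page
 * "sring" with
 *
 *   SHARED_RING_INIT(sring);
 *   FRONT_RING_INIT(&ring, sring, PAGE_SIZE);
 *
 * where "ring" is a struct blkif_front_ring private to the frontend.)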
*/ DEFINE_RING_TYPES(blkif, struct blkif_request, struct blkif_response); #define VDISK_CDROM 0x1 #define VDISK_REMOVABLE 0x2 #define VDISK_READONLY 0x4 /* Xen-defined major numbers for virtual disks, they look strangely * familiar */ #define XEN_IDE0_MAJOR 3 #define XEN_IDE1_MAJOR 22 #define XEN_SCSI_DISK0_MAJOR 8 #define XEN_SCSI_DISK1_MAJOR 65 #define XEN_SCSI_DISK2_MAJOR 66 #define XEN_SCSI_DISK3_MAJOR 67 #define XEN_SCSI_DISK4_MAJOR 68 #define XEN_SCSI_DISK5_MAJOR 69 #define XEN_SCSI_DISK6_MAJOR 70 #define XEN_SCSI_DISK7_MAJOR 71 #define XEN_SCSI_DISK8_MAJOR 128 #define XEN_SCSI_DISK9_MAJOR 129 #define XEN_SCSI_DISK10_MAJOR 130 #define XEN_SCSI_DISK11_MAJOR 131 #define XEN_SCSI_DISK12_MAJOR 132 #define XEN_SCSI_DISK13_MAJOR 133 #define XEN_SCSI_DISK14_MAJOR 134 #define XEN_SCSI_DISK15_MAJOR 135 #endif /* __XEN_PUBLIC_IO_BLKIF_H__ */ interface/io/kbdif.h /* * kbdif.h -- Xen virtual keyboard/mouse * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (C) 2005 Anthony Liguori <aliguori@us.ibm.com> * Copyright (C) 2006 Red Hat, Inc., Markus Armbruster <armbru@redhat.com> */ #ifndef __XEN_PUBLIC_IO_KBDIF_H__ #define __XEN_PUBLIC_IO_KBDIF_H__ /* ***************************************************************************** * Feature and Parameter Negotiation ***************************************************************************** * * The two halves of a para-virtual driver utilize nodes within * XenStore to communicate capabilities and to negotiate operating parameters. * This section enumerates these nodes which reside in the respective front and * backend portions of XenStore, following XenBus convention. * * All data in XenStore is stored as strings. Nodes specifying numeric * values are encoded in decimal. Integer value ranges listed below are * expressed as fixed sized integer types capable of storing the conversion * of a properly formatted node string, without loss of information. * ***************************************************************************** * Backend XenBus Nodes ***************************************************************************** * *---------------------------- Features supported ---------------------------- * * A capable backend advertises supported features by publishing * corresponding entries in XenStore and setting 1 as the value of each entry. * If a feature is not supported then 0 must be set or the feature entry omitted.
* * feature-disable-keyboard * Values: <uint> * * If there is no need to expose a virtual keyboard device by the * frontend then this must be set to 1. * * feature-disable-pointer * Values: <uint> * * If there is no need to expose a virtual pointer device by the * frontend then this must be set to 1. * * feature-abs-pointer * Values: <uint> * * Backends, which support reporting of absolute coordinates for pointer * device should set this to 1. * * feature-multi-touch * Values: <uint> * * Backends, which support reporting of multi-touch events * should set this to 1. * * feature-raw-pointer * Values: <uint> * * Backends, which support reporting raw (unscaled) absolute coordinates * for pointer devices should set this to 1. Raw (unscaled) values have * a range of [0, 0x7fff]. * *----------------------- Device Instance Parameters ------------------------ * * unique-id * Values: <string> * * After device instance initialization it is assigned a unique ID, * so every instance of the frontend can be identified by the backend * by this ID. This can be UUID or such. * *------------------------- Pointer Device Parameters ------------------------ * * width * Values: <uint> * * Maximum X coordinate (width) to be used by the frontend * while reporting input events, pixels, [0; UINT32_MAX]. * * height * Values: <uint> * * Maximum Y coordinate (height) to be used by the frontend * while reporting input events, pixels, [0; UINT32_MAX]. * *----------------------- Multi-touch Device Parameters ---------------------- * * multi-touch-num-contacts * Values: <uint> * * Number of simultaneous touches reported. * * multi-touch-width * Values: <uint> * * Width of the touch area to be used by the frontend * while reporting input events, pixels, [0; UINT32_MAX]. * * multi-touch-height * Values: <uint> * * Height of the touch area to be used by the frontend * while reporting input events, pixels, [0; UINT32_MAX]. * ***************************************************************************** * Frontend XenBus Nodes ***************************************************************************** * *------------------------------ Feature request ----------------------------- * * Capable frontend requests features from backend via setting corresponding * entries to 1 in XenStore. Requests for features not advertised as supported * by the backend have no effect. * * request-abs-pointer * Values: <uint> * * Request backend to report absolute pointer coordinates * (XENKBD_TYPE_POS) instead of relative ones (XENKBD_TYPE_MOTION). * * request-multi-touch * Values: <uint> * * Request backend to report multi-touch events. * * request-raw-pointer * Values: <uint> * * Request backend to report raw unscaled absolute pointer coordinates. * This option is only valid if request-abs-pointer is also set. * Raw unscaled coordinates have the range [0, 0x7fff] * *----------------------- Request Transport Parameters ----------------------- * * event-channel * Values: <uint> * * The identifier of the Xen event channel used to signal activity * in the ring buffer. * * page-gref * Values: <uint> * * The Xen grant reference granting permission for the backend to map * a sole page in a single page sized event ring buffer. * * page-ref * Values: <uint> * * OBSOLETE, not recommended for use. * PFN of the shared page. */ /* * EVENT CODES. 
*/ #define XENKBD_TYPE_MOTION 1 #define XENKBD_TYPE_RESERVED 2 #define XENKBD_TYPE_KEY 3 #define XENKBD_TYPE_POS 4 #define XENKBD_TYPE_MTOUCH 5 /* Multi-touch event sub-codes */ #define XENKBD_MT_EV_DOWN 0 #define XENKBD_MT_EV_UP 1 #define XENKBD_MT_EV_MOTION 2 #define XENKBD_MT_EV_SYN 3 #define XENKBD_MT_EV_SHAPE 4 #define XENKBD_MT_EV_ORIENT 5 /* * CONSTANTS, XENSTORE FIELD AND PATH NAME STRINGS, HELPERS. */ #define XENKBD_DRIVER_NAME "vkbd" #define XENKBD_FIELD_FEAT_DSBL_KEYBRD "feature-disable-keyboard" #define XENKBD_FIELD_FEAT_DSBL_POINTER "feature-disable-pointer" #define XENKBD_FIELD_FEAT_ABS_POINTER "feature-abs-pointer" #define XENKBD_FIELD_FEAT_RAW_POINTER "feature-raw-pointer" #define XENKBD_FIELD_FEAT_MTOUCH "feature-multi-touch" #define XENKBD_FIELD_REQ_ABS_POINTER "request-abs-pointer" #define XENKBD_FIELD_REQ_RAW_POINTER "request-raw-pointer" #define XENKBD_FIELD_REQ_MTOUCH "request-multi-touch" #define XENKBD_FIELD_RING_GREF "page-gref" #define XENKBD_FIELD_EVT_CHANNEL "event-channel" #define XENKBD_FIELD_WIDTH "width" #define XENKBD_FIELD_HEIGHT "height" #define XENKBD_FIELD_MT_WIDTH "multi-touch-width" #define XENKBD_FIELD_MT_HEIGHT "multi-touch-height" #define XENKBD_FIELD_MT_NUM_CONTACTS "multi-touch-num-contacts" #define XENKBD_FIELD_UNIQUE_ID "unique-id" /* OBSOLETE, not recommended for use */ #define XENKBD_FIELD_RING_REF "page-ref" /* ***************************************************************************** * Description of the protocol between frontend and backend driver. ***************************************************************************** * * The two halves of a Para-virtual driver communicate with * each other using a shared page and an event channel. * Shared page contains a ring with event structures. * * All reserved fields in the structures below must be 0. * ***************************************************************************** * Backend to frontend events ***************************************************************************** * * Frontends should ignore unknown in events. * All event packets have the same length (40 octets) * All event packets have common header: * * 0 octet * +-----------------+ * | type | * +-----------------+ * type - uint8_t, event code, XENKBD_TYPE_??? 
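 *
 * Illustrative sketch (not part of the original header): a frontend
 * consuming IN events would typically dispatch on this type octet,
 * using the ring accessors defined at the end of this file.  Here
 * "page" is the shared struct xenkbd_page, handle_key() is a
 * hypothetical handler, and memory barriers are omitted for brevity:
 *
 *   while (page->in_cons != page->in_prod) {
 *           union xenkbd_in_event *ev =
 *                   &XENKBD_IN_RING_REF(page, page->in_cons);
 *           switch (ev->type) {
 *           case XENKBD_TYPE_KEY:
 *                   handle_key(ev->key.keycode, ev->key.pressed);
 *                   break;
 *           default:
 *                   break;   (unknown IN events are ignored)
 *           }
 *           page->in_cons++;
 *   }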
* * * Pointer relative movement event * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | _TYPE_MOTION | reserved | 4 * +----------------+----------------+----------------+----------------+ * | rel_x | 8 * +----------------+----------------+----------------+----------------+ * | rel_y | 12 * +----------------+----------------+----------------+----------------+ * | rel_z | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 40 * +----------------+----------------+----------------+----------------+ * * rel_x - int32_t, relative X motion * rel_y - int32_t, relative Y motion * rel_z - int32_t, relative Z motion (wheel) */ struct xenkbd_motion { uint8_t type; int32_t rel_x; int32_t rel_y; int32_t rel_z; }; /* * Key event (includes pointer buttons) * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | _TYPE_KEY | pressed | reserved | 4 * +----------------+----------------+----------------+----------------+ * | keycode | 8 * +----------------+----------------+----------------+----------------+ * | reserved | 12 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 40 * +----------------+----------------+----------------+----------------+ * * pressed - uint8_t, 1 if pressed; 0 otherwise * keycode - uint32_t, KEY_* from linux/input.h */ struct xenkbd_key { uint8_t type; uint8_t pressed; uint32_t keycode; }; /* * Pointer absolute position event * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | _TYPE_POS | reserved | 4 * +----------------+----------------+----------------+----------------+ * | abs_x | 8 * +----------------+----------------+----------------+----------------+ * | abs_y | 12 * +----------------+----------------+----------------+----------------+ * | rel_z | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 40 * +----------------+----------------+----------------+----------------+ * * abs_x - int32_t, absolute X position (in FB pixels) * abs_y - int32_t, absolute Y position (in FB pixels) * rel_z - int32_t, relative Z motion (wheel) */ struct xenkbd_position { uint8_t type; int32_t abs_x; int32_t abs_y; int32_t rel_z; }; /* * Multi-touch event and its sub-types * * All multi-touch event packets have common header: * * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | _TYPE_MTOUCH | event_type | contact_id | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * * event_type - uint8_t, multi-touch event sub-type, XENKBD_MT_EV_??? * contact_id - uint8_t, ID of the contact * * Touch interactions can consist of one or more contacts.
* For each contact, a series of events is generated, starting * with a down event, followed by zero or more motion events, * and ending with an up event. Events relating to the same * contact point can be identified by the ID of the sequence: contact ID. * Contact ID may be reused after XENKBD_MT_EV_UP event and * is in the [0; XENKBD_FIELD_NUM_CONTACTS - 1] range. * * For further information please refer to documentation on Wayland [1], * Linux [2] and Windows [3] multi-touch support. * * [1] https://cgit.freedesktop.org/wayland/wayland/tree/protocol/wayland.xml * [2] https://www.kernel.org/doc/Documentation/input/multi-touch-protocol.rst * [3] https://msdn.microsoft.com/en-us/library/jj151564(v=vs.85).aspx * * * Multi-touch down event - sent when a new touch is made: touch is assigned * a unique contact ID, sent with this and consequent events related * to this touch. * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | _TYPE_MTOUCH | _MT_EV_DOWN | contact_id | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | abs_x | 12 * +----------------+----------------+----------------+----------------+ * | abs_y | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 40 * +----------------+----------------+----------------+----------------+ * * abs_x - int32_t, absolute X position, in pixels * abs_y - int32_t, absolute Y position, in pixels * * Multi-touch contact release event * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | _TYPE_MTOUCH | _MT_EV_UP | contact_id | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 40 * +----------------+----------------+----------------+----------------+ * * Multi-touch motion event * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | _TYPE_MTOUCH | _MT_EV_MOTION | contact_id | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | abs_x | 12 * +----------------+----------------+----------------+----------------+ * | abs_y | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 40 * +----------------+----------------+----------------+----------------+ * * abs_x - int32_t, absolute X position, in pixels, * abs_y - int32_t, absolute Y position, in pixels, * * Multi-touch input synchronization event - shows end of a set of events * which logically belong together. 
* 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | _TYPE_MTOUCH | _MT_EV_SYN | contact_id | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 40 * +----------------+----------------+----------------+----------------+ * * Multi-touch shape event - touch point's shape has changed. * Shape is approximated by an ellipse through the major and minor axis * lengths: major is the longer diameter of the ellipse and minor is the * shorter one. Center of the ellipse is reported via * XENKBD_MT_EV_DOWN/XENKBD_MT_EV_MOTION events. * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | _TYPE_MTOUCH | _MT_EV_SHAPE | contact_id | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | major | 12 * +----------------+----------------+----------------+----------------+ * | minor | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 40 * +----------------+----------------+----------------+----------------+ * * major - uint32_t, length of the major axis, pixels * minor - uint32_t, length of the minor axis, pixels * * Multi-touch orientation event - touch point's shape has changed * its orientation: calculated as a clockwise angle between the major axis * of the ellipse and positive Y axis in degrees, [-180; +180]. * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | _TYPE_MTOUCH | _MT_EV_ORIENT | contact_id | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | orientation | reserved | 12 * +----------------+----------------+----------------+----------------+ * | reserved | 16 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 40 * +----------------+----------------+----------------+----------------+ * * orientation - int16_t, clockwise angle of the major axis */ struct xenkbd_mtouch { uint8_t type; /* XENKBD_TYPE_MTOUCH */ uint8_t event_type; /* XENKBD_MT_EV_???
*/ uint8_t contact_id; uint8_t reserved[5]; /* reserved for future use */ union { struct { int32_t abs_x; /* absolute X position, pixels */ int32_t abs_y; /* absolute Y position, pixels */ } pos; struct { uint32_t major; /* length of the major axis, pixels */ uint32_t minor; /* length of the minor axis, pixels */ } shape; int16_t orientation; /* clockwise angle of the major axis */ } u; }; #define XENKBD_IN_EVENT_SIZE 40 union xenkbd_in_event { uint8_t type; struct xenkbd_motion motion; struct xenkbd_key key; struct xenkbd_position pos; struct xenkbd_mtouch mtouch; char pad[XENKBD_IN_EVENT_SIZE]; }; /* ***************************************************************************** * Frontend to backend events ***************************************************************************** * * Out events may be sent only when requested by backend, and receipt * of an unknown out event is an error. * No out events currently defined. * All event packets have the same length (40 octets) * All event packets have common header: * 0 octet * +-----------------+ * | type | * +-----------------+ * type - uint8_t, event code */ #define XENKBD_OUT_EVENT_SIZE 40 union xenkbd_out_event { uint8_t type; char pad[XENKBD_OUT_EVENT_SIZE]; }; /* ***************************************************************************** * Shared page ***************************************************************************** */ #define XENKBD_IN_RING_SIZE 2048 #define XENKBD_IN_RING_LEN (XENKBD_IN_RING_SIZE / XENKBD_IN_EVENT_SIZE) #define XENKBD_IN_RING_OFFS 1024 #define XENKBD_IN_RING(page) \ ((union xenkbd_in_event *)((char *)(page) + XENKBD_IN_RING_OFFS)) #define XENKBD_IN_RING_REF(page, idx) \ (XENKBD_IN_RING((page))[(idx) % XENKBD_IN_RING_LEN]) #define XENKBD_OUT_RING_SIZE 1024 #define XENKBD_OUT_RING_LEN (XENKBD_OUT_RING_SIZE / XENKBD_OUT_EVENT_SIZE) #define XENKBD_OUT_RING_OFFS (XENKBD_IN_RING_OFFS + XENKBD_IN_RING_SIZE) #define XENKBD_OUT_RING(page) \ ((union xenkbd_out_event *)((char *)(page) + XENKBD_OUT_RING_OFFS)) #define XENKBD_OUT_RING_REF(page, idx) \ (XENKBD_OUT_RING((page))[(idx) % XENKBD_OUT_RING_LEN]) struct xenkbd_page { uint32_t in_cons, in_prod; uint32_t out_cons, out_prod; }; #endif /* __XEN_PUBLIC_IO_KBDIF_H__ */ interface/io/fbif.h 0000644 00000010752 14722073410 0010177 0 ustar 00 /* * fbif.h -- Xen virtual frame buffer device * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE.
* * Copyright (C) 2005 Anthony Liguori <aliguori@us.ibm.com> * Copyright (C) 2006 Red Hat, Inc., Markus Armbruster <armbru@redhat.com> */ #ifndef __XEN_PUBLIC_IO_FBIF_H__ #define __XEN_PUBLIC_IO_FBIF_H__ /* Out events (frontend -> backend) */ /* * Out events may be sent only when requested by backend, and receipt * of an unknown out event is an error. */ /* Event type 1 currently not used */ /* * Framebuffer update notification event * Capable frontend sets feature-update in xenstore. * Backend requests it by setting request-update in xenstore. */ #define XENFB_TYPE_UPDATE 2 struct xenfb_update { uint8_t type; /* XENFB_TYPE_UPDATE */ int32_t x; /* source x */ int32_t y; /* source y */ int32_t width; /* rect width */ int32_t height; /* rect height */ }; /* * Framebuffer resize notification event * Capable backend sets feature-resize in xenstore. */ #define XENFB_TYPE_RESIZE 3 struct xenfb_resize { uint8_t type; /* XENFB_TYPE_RESIZE */ int32_t width; /* width in pixels */ int32_t height; /* height in pixels */ int32_t stride; /* stride in bytes */ int32_t depth; /* depth in bits */ int32_t offset; /* start offset within framebuffer */ }; #define XENFB_OUT_EVENT_SIZE 40 union xenfb_out_event { uint8_t type; struct xenfb_update update; struct xenfb_resize resize; char pad[XENFB_OUT_EVENT_SIZE]; }; /* In events (backend -> frontend) */ /* * Frontends should ignore unknown in events. * No in events currently defined. */ #define XENFB_IN_EVENT_SIZE 40 union xenfb_in_event { uint8_t type; char pad[XENFB_IN_EVENT_SIZE]; }; /* shared page */ #define XENFB_IN_RING_SIZE 1024 #define XENFB_IN_RING_LEN (XENFB_IN_RING_SIZE / XENFB_IN_EVENT_SIZE) #define XENFB_IN_RING_OFFS 1024 #define XENFB_IN_RING(page) \ ((union xenfb_in_event *)((char *)(page) + XENFB_IN_RING_OFFS)) #define XENFB_IN_RING_REF(page, idx) \ (XENFB_IN_RING((page))[(idx) % XENFB_IN_RING_LEN]) #define XENFB_OUT_RING_SIZE 2048 #define XENFB_OUT_RING_LEN (XENFB_OUT_RING_SIZE / XENFB_OUT_EVENT_SIZE) #define XENFB_OUT_RING_OFFS (XENFB_IN_RING_OFFS + XENFB_IN_RING_SIZE) #define XENFB_OUT_RING(page) \ ((union xenfb_out_event *)((char *)(page) + XENFB_OUT_RING_OFFS)) #define XENFB_OUT_RING_REF(page, idx) \ (XENFB_OUT_RING((page))[(idx) % XENFB_OUT_RING_LEN]) struct xenfb_page { uint32_t in_cons, in_prod; uint32_t out_cons, out_prod; int32_t width; /* width of the framebuffer (in pixels) */ int32_t height; /* height of the framebuffer (in pixels) */ uint32_t line_length; /* length of a row of pixels (in bytes) */ uint32_t mem_length; /* length of the framebuffer (in bytes) */ uint8_t depth; /* depth of a pixel (in bits) */ /* * Framebuffer page directory * * Each directory page holds PAGE_SIZE / sizeof(*pd) * framebuffer pages, and can thus map up to PAGE_SIZE * * PAGE_SIZE / sizeof(*pd) bytes. With PAGE_SIZE == 4096 and * sizeof(unsigned long) == 4/8, that's 4 Megs 32 bit and 2 * Megs 64 bit. 256 directories give enough room for a 512 * Meg framebuffer with a max resolution of 12,800x10,240. * Should be enough for a while with room leftover for * expansion. */ unsigned long pd[256]; }; /* * Wart: xenkbd needs to know default resolution. Put it here until a * better solution is found, but don't leak it to the backend. */ #ifdef __KERNEL__ #define XENFB_WIDTH 800 #define XENFB_HEIGHT 600 #define XENFB_DEPTH 32 #endif #endif interface/io/sndif.h 0000644 00000140506 14722073410 0010375 0 ustar 00 /****************************************************************************** * sndif.h * * Unified sound-device I/O interface for Xen guest OSes. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (C) 2013-2015 GlobalLogic Inc. * Copyright (C) 2016-2017 EPAM Systems Inc. * * Authors: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com> * Oleksandr Grytsov <oleksandr_grytsov@epam.com> * Oleksandr Dmytryshyn <oleksandr.dmytryshyn@globallogic.com> * Iurii Konovalenko <iurii.konovalenko@globallogic.com> */ #ifndef __XEN_PUBLIC_IO_SNDIF_H__ #define __XEN_PUBLIC_IO_SNDIF_H__ #include "ring.h" #include "../grant_table.h" /* ****************************************************************************** * Protocol version ****************************************************************************** */ #define XENSND_PROTOCOL_VERSION 2 /* ****************************************************************************** * Feature and Parameter Negotiation ****************************************************************************** * * Front->back notifications: when enqueuing a new request, sending a * notification can be made conditional on xensnd_req (i.e., the generic * hold-off mechanism provided by the ring macros). Backends must set * xensnd_req appropriately (e.g., using RING_FINAL_CHECK_FOR_REQUESTS()). * * Back->front notifications: when enqueuing a new response, sending a * notification can be made conditional on xensnd_resp (i.e., the generic * hold-off mechanism provided by the ring macros). Frontends must set * xensnd_resp appropriately (e.g., using RING_FINAL_CHECK_FOR_RESPONSES()). * * The two halves of a para-virtual sound card driver utilize nodes within * XenStore to communicate capabilities and to negotiate operating parameters. * This section enumerates these nodes which reside in the respective front and * backend portions of XenStore, following the XenBus convention. * * All data in XenStore is stored as strings. Nodes specifying numeric * values are encoded in decimal. Integer value ranges listed below are * expressed as fixed sized integer types capable of storing the conversion * of a properly formatted node string, without loss of information. * ****************************************************************************** * Example configuration ****************************************************************************** * * Note: depending on the use-case backend can expose more sound cards and * PCM devices/streams than the underlying HW physically has by employing * SW mixers, configuring virtual sound streams, channels etc.
* * This is an example of backend and frontend configuration: * *--------------------------------- Backend ----------------------------------- * * /local/domain/0/backend/vsnd/1/0/frontend-id = "1" * /local/domain/0/backend/vsnd/1/0/frontend = "/local/domain/1/device/vsnd/0" * /local/domain/0/backend/vsnd/1/0/state = "4" * /local/domain/0/backend/vsnd/1/0/versions = "1,2" * *--------------------------------- Frontend ---------------------------------- * * /local/domain/1/device/vsnd/0/backend-id = "0" * /local/domain/1/device/vsnd/0/backend = "/local/domain/0/backend/vsnd/1/0" * /local/domain/1/device/vsnd/0/state = "4" * /local/domain/1/device/vsnd/0/version = "1" * *----------------------------- Card configuration ---------------------------- * * /local/domain/1/device/vsnd/0/short-name = "Card short name" * /local/domain/1/device/vsnd/0/long-name = "Card long name" * /local/domain/1/device/vsnd/0/sample-rates = "8000,32000,44100,48000,96000" * /local/domain/1/device/vsnd/0/sample-formats = "s8,u8,s16_le,s16_be" * /local/domain/1/device/vsnd/0/buffer-size = "262144" * *------------------------------- PCM device 0 -------------------------------- * * /local/domain/1/device/vsnd/0/0/name = "General analog" * /local/domain/1/device/vsnd/0/0/channels-max = "5" * *----------------------------- Stream 0, playback ---------------------------- * * /local/domain/1/device/vsnd/0/0/0/type = "p" * /local/domain/1/device/vsnd/0/0/0/sample-formats = "s8,u8" * /local/domain/1/device/vsnd/0/0/0/unique-id = "0" * * /local/domain/1/device/vsnd/0/0/0/ring-ref = "386" * /local/domain/1/device/vsnd/0/0/0/event-channel = "15" * /local/domain/1/device/vsnd/0/0/0/evt-ring-ref = "1386" * /local/domain/1/device/vsnd/0/0/0/evt-event-channel = "215" * *------------------------------ Stream 1, capture ---------------------------- * * /local/domain/1/device/vsnd/0/0/1/type = "c" * /local/domain/1/device/vsnd/0/0/1/channels-max = "2" * /local/domain/1/device/vsnd/0/0/1/unique-id = "1" * * /local/domain/1/device/vsnd/0/0/1/ring-ref = "384" * /local/domain/1/device/vsnd/0/0/1/event-channel = "13" * /local/domain/1/device/vsnd/0/0/1/evt-ring-ref = "1384" * /local/domain/1/device/vsnd/0/0/1/evt-event-channel = "213" * *------------------------------- PCM device 1 -------------------------------- * * /local/domain/1/device/vsnd/0/1/name = "HDMI-0" * /local/domain/1/device/vsnd/0/1/sample-rates = "8000,32000,44100" * *------------------------------ Stream 0, capture ---------------------------- * * /local/domain/1/device/vsnd/0/1/0/type = "c" * /local/domain/1/device/vsnd/0/1/0/unique-id = "2" * * /local/domain/1/device/vsnd/0/1/0/ring-ref = "387" * /local/domain/1/device/vsnd/0/1/0/event-channel = "151" * /local/domain/1/device/vsnd/0/1/0/evt-ring-ref = "1387" * /local/domain/1/device/vsnd/0/1/0/evt-event-channel = "351" * *------------------------------- PCM device 2 -------------------------------- * * /local/domain/1/device/vsnd/0/2/name = "SPDIF" * *----------------------------- Stream 0, playback ---------------------------- * * /local/domain/1/device/vsnd/0/2/0/type = "p" * /local/domain/1/device/vsnd/0/2/0/unique-id = "3" * * /local/domain/1/device/vsnd/0/2/0/ring-ref = "389" * /local/domain/1/device/vsnd/0/2/0/event-channel = "152" * /local/domain/1/device/vsnd/0/2/0/evt-ring-ref = "1389" * /local/domain/1/device/vsnd/0/2/0/evt-event-channel = "452" * ****************************************************************************** * Backend XenBus Nodes 
****************************************************************************** * *----------------------------- Protocol version ------------------------------ * * versions * Values: <string> * * List of XENSND_LIST_SEPARATOR separated protocol versions supported * by the backend. For example "1,2,3". * ****************************************************************************** * Frontend XenBus Nodes ****************************************************************************** * *-------------------------------- Addressing --------------------------------- * * dom-id * Values: <uint16_t> * * Domain identifier. * * dev-id * Values: <uint16_t> * * Device identifier. * * pcm-dev-idx * Values: <uint8_t> * * Zero based contiguous index of the PCM device. * * stream-idx * Values: <uint8_t> * * Zero based contiguous index of the stream of the PCM device. * * The following pattern is used for addressing: * /local/domain/<dom-id>/device/vsnd/<dev-id>/<pcm-dev-idx>/<stream-idx>/... * *----------------------------- Protocol version ------------------------------ * * version * Values: <string> * * Protocol version, chosen among the ones supported by the backend. * *------------------------------- PCM settings -------------------------------- * * Every virtualized sound frontend has a set of PCM devices and streams, each * of which can be individually configured. Part of the PCM configuration can be * defined at a higher level of the hierarchy and be fully or partially re-used * by the underlying layers. These configuration values are: * o number of channels (min/max) * o supported sample rates * o supported sample formats. * E.g. one can define these values for the whole card, device or stream. * Every underlying layer in turn can re-define some or all of them to better * fit its needs. For example, a card may define the number of channels to be * in the [1; 8] range, and some particular stream may be limited to [1; 2] only. * The rule is that the underlying layer must be a subset of the upper layer * range. * * channels-min * Values: <uint8_t> * * The minimum number of channels that is supported, [1; channels-max]. * Optional, if not set or omitted a value of 1 is used. * * channels-max * Values: <uint8_t> * * The maximum number of channels that is supported. * Must be at least <channels-min>. * * sample-rates * Values: <list of uint32_t> * * List of supported sample rates separated by XENSND_LIST_SEPARATOR. * Sample rates are expressed as a list of decimal values w/o any * ordering requirement. * * sample-formats * Values: <list of XENSND_PCM_FORMAT_XXX_STR> * * List of supported sample formats separated by XENSND_LIST_SEPARATOR. * Items must not exceed XENSND_SAMPLE_FORMAT_MAX_LEN length. * * buffer-size * Values: <uint32_t> * * The maximum size in octets of the buffer to allocate per stream. * *----------------------- Virtual sound card settings ------------------------- * short-name * Values: <char[32]> * * Short name of the virtual sound card. Optional. * * long-name * Values: <char[80]> * * Long name of the virtual sound card. Optional. * *----------------------------- Device settings ------------------------------- * name * Values: <char[80]> * * Name of the sound device within the virtual sound card. Optional. * *----------------------------- Stream settings ------------------------------- * * type * Values: "p", "c" * * Stream type: "p" - playback stream, "c" - capture stream * * If both capture and playback are needed then two streams need to be * defined under the same device.
* * unique-id * Values: <string> * * After stream initialization it is assigned a unique ID, so every * stream of the frontend can be identified by the backend by this ID. * This can be a UUID or similar. * *-------------------- Stream Request Transport Parameters -------------------- * * event-channel * Values: <uint32_t> * * The identifier of the Xen event channel used to signal activity * in the ring buffer. * * ring-ref * Values: <uint32_t> * * The Xen grant reference granting permission for the backend to map * a sole page in a single page sized ring buffer. * *--------------------- Stream Event Transport Parameters --------------------- * * This communication path is used to deliver asynchronous events from backend * to frontend, set up per stream. * * evt-event-channel * Values: <uint32_t> * * The identifier of the Xen event channel used to signal activity * in the ring buffer. * * evt-ring-ref * Values: <uint32_t> * * The Xen grant reference granting permission for the backend to map * a sole page in a single page sized ring buffer. * ****************************************************************************** * STATE DIAGRAMS ****************************************************************************** * * Tool stack creates front and back state nodes with initial state * XenbusStateInitialising. * Tool stack creates and sets up frontend sound configuration nodes per domain. * * Front Back * ================================= ===================================== * XenbusStateInitialising XenbusStateInitialising * o Query backend device identification * data. * o Open and validate backend device. * | * | * V * XenbusStateInitWait * * o Query frontend configuration * o Allocate and initialize * event channels per configured * playback/capture stream. * o Publish transport parameters * that will be in effect during * this connection. * | * | * V * XenbusStateInitialised * * o Query frontend transport parameters. * o Connect to the event channels. * | * | * V * XenbusStateConnected * * o Create and initialize OS * virtual sound device instances * as per configuration. * | * | * V * XenbusStateConnected * * XenbusStateUnknown * XenbusStateClosed * XenbusStateClosing * o Remove virtual sound device * o Remove event channels * | * | * V * XenbusStateClosed * *------------------------------- Recovery flow ------------------------------- * * In case of frontend unrecoverable errors backend handles that as * if frontend goes into the XenbusStateClosed state. * * In case of backend unrecoverable errors frontend tries removing * the virtualized device. If this is possible at the moment of error, * then frontend goes into the XenbusStateInitialising state and is ready for * new connection with backend. If the virtualized device is still in use and * cannot be removed, then frontend goes into the XenbusStateReconfiguring state * until either the virtualized device is removed or the backend initiates a new * connection. On the virtualized device removal frontend goes into the * XenbusStateInitialising state. * * Note on XenbusStateReconfiguring state of the frontend: if backend has * unrecoverable errors then frontend cannot send requests to the backend * and thus cannot provide functionality of the virtualized device anymore. * After backend is back to normal the virtualized device may still hold some * state: configuration in use, allocated buffers, client application state etc. * So, in most cases, this will require frontend to implement complex recovery * reconnect logic.
Instead, by going into XenbusStateReconfiguring state, * frontend will make sure no new clients of the virtualized device are * accepted and will allow existing client(s) to exit gracefully by signaling * error state etc. * Once all the clients are gone frontend can reinitialize the virtualized * device and get into XenbusStateInitialising state again signaling the * backend that a new connection can be made. * * There are multiple conditions possible under which frontend will go from * XenbusStateReconfiguring into XenbusStateInitialising, some of them are OS * specific. For example: * 1. The underlying OS framework may provide callbacks to signal that the last * client of the virtualized device has gone and the device can be removed * 2. Frontend can schedule deferred work (timer/tasklet/workqueue) * to periodically check if this is the right time to re-try removal of * the virtualized device. * 3. By any other means. * ****************************************************************************** * PCM FORMATS ****************************************************************************** * * XENSND_PCM_FORMAT_<format>[_<endian>] * * format: <S/U/F><bits> or <name> * S - signed, U - unsigned, F - float * bits - 8, 16, 24, 32 * name - MU_LAW, GSM, etc. * * endian: <LE/BE>, may be absent * LE - Little endian, BE - Big endian */ #define XENSND_PCM_FORMAT_S8 0 #define XENSND_PCM_FORMAT_U8 1 #define XENSND_PCM_FORMAT_S16_LE 2 #define XENSND_PCM_FORMAT_S16_BE 3 #define XENSND_PCM_FORMAT_U16_LE 4 #define XENSND_PCM_FORMAT_U16_BE 5 #define XENSND_PCM_FORMAT_S24_LE 6 #define XENSND_PCM_FORMAT_S24_BE 7 #define XENSND_PCM_FORMAT_U24_LE 8 #define XENSND_PCM_FORMAT_U24_BE 9 #define XENSND_PCM_FORMAT_S32_LE 10 #define XENSND_PCM_FORMAT_S32_BE 11 #define XENSND_PCM_FORMAT_U32_LE 12 #define XENSND_PCM_FORMAT_U32_BE 13 #define XENSND_PCM_FORMAT_F32_LE 14 /* 4-byte float, IEEE-754 32-bit, */ #define XENSND_PCM_FORMAT_F32_BE 15 /* range -1.0 to 1.0 */ #define XENSND_PCM_FORMAT_F64_LE 16 /* 8-byte float, IEEE-754 64-bit, */ #define XENSND_PCM_FORMAT_F64_BE 17 /* range -1.0 to 1.0 */ #define XENSND_PCM_FORMAT_IEC958_SUBFRAME_LE 18 #define XENSND_PCM_FORMAT_IEC958_SUBFRAME_BE 19 #define XENSND_PCM_FORMAT_MU_LAW 20 #define XENSND_PCM_FORMAT_A_LAW 21 #define XENSND_PCM_FORMAT_IMA_ADPCM 22 #define XENSND_PCM_FORMAT_MPEG 23 #define XENSND_PCM_FORMAT_GSM 24 /* ****************************************************************************** * REQUEST CODES ****************************************************************************** */ #define XENSND_OP_OPEN 0 #define XENSND_OP_CLOSE 1 #define XENSND_OP_READ 2 #define XENSND_OP_WRITE 3 #define XENSND_OP_SET_VOLUME 4 #define XENSND_OP_GET_VOLUME 5 #define XENSND_OP_MUTE 6 #define XENSND_OP_UNMUTE 7 #define XENSND_OP_TRIGGER 8 #define XENSND_OP_HW_PARAM_QUERY 9 #define XENSND_OP_TRIGGER_START 0 #define XENSND_OP_TRIGGER_PAUSE 1 #define XENSND_OP_TRIGGER_STOP 2 #define XENSND_OP_TRIGGER_RESUME 3 /* ****************************************************************************** * EVENT CODES ****************************************************************************** */ #define XENSND_EVT_CUR_POS 0 /* ****************************************************************************** * XENSTORE FIELD AND PATH NAME STRINGS, HELPERS ****************************************************************************** */ #define XENSND_DRIVER_NAME "vsnd" #define XENSND_LIST_SEPARATOR "," /* Field names */ #define XENSND_FIELD_BE_VERSIONS "versions" #define XENSND_FIELD_FE_VERSION "version"
#define XENSND_FIELD_VCARD_SHORT_NAME "short-name" #define XENSND_FIELD_VCARD_LONG_NAME "long-name" #define XENSND_FIELD_RING_REF "ring-ref" #define XENSND_FIELD_EVT_CHNL "event-channel" #define XENSND_FIELD_EVT_RING_REF "evt-ring-ref" #define XENSND_FIELD_EVT_EVT_CHNL "evt-event-channel" #define XENSND_FIELD_DEVICE_NAME "name" #define XENSND_FIELD_TYPE "type" #define XENSND_FIELD_STREAM_UNIQUE_ID "unique-id" #define XENSND_FIELD_CHANNELS_MIN "channels-min" #define XENSND_FIELD_CHANNELS_MAX "channels-max" #define XENSND_FIELD_SAMPLE_RATES "sample-rates" #define XENSND_FIELD_SAMPLE_FORMATS "sample-formats" #define XENSND_FIELD_BUFFER_SIZE "buffer-size" /* Stream type field values. */ #define XENSND_STREAM_TYPE_PLAYBACK "p" #define XENSND_STREAM_TYPE_CAPTURE "c" /* Sample rate max string length */ #define XENSND_SAMPLE_RATE_MAX_LEN 11 /* Sample format field values */ #define XENSND_SAMPLE_FORMAT_MAX_LEN 24 #define XENSND_PCM_FORMAT_S8_STR "s8" #define XENSND_PCM_FORMAT_U8_STR "u8" #define XENSND_PCM_FORMAT_S16_LE_STR "s16_le" #define XENSND_PCM_FORMAT_S16_BE_STR "s16_be" #define XENSND_PCM_FORMAT_U16_LE_STR "u16_le" #define XENSND_PCM_FORMAT_U16_BE_STR "u16_be" #define XENSND_PCM_FORMAT_S24_LE_STR "s24_le" #define XENSND_PCM_FORMAT_S24_BE_STR "s24_be" #define XENSND_PCM_FORMAT_U24_LE_STR "u24_le" #define XENSND_PCM_FORMAT_U24_BE_STR "u24_be" #define XENSND_PCM_FORMAT_S32_LE_STR "s32_le" #define XENSND_PCM_FORMAT_S32_BE_STR "s32_be" #define XENSND_PCM_FORMAT_U32_LE_STR "u32_le" #define XENSND_PCM_FORMAT_U32_BE_STR "u32_be" #define XENSND_PCM_FORMAT_F32_LE_STR "float_le" #define XENSND_PCM_FORMAT_F32_BE_STR "float_be" #define XENSND_PCM_FORMAT_F64_LE_STR "float64_le" #define XENSND_PCM_FORMAT_F64_BE_STR "float64_be" #define XENSND_PCM_FORMAT_IEC958_SUBFRAME_LE_STR "iec958_subframe_le" #define XENSND_PCM_FORMAT_IEC958_SUBFRAME_BE_STR "iec958_subframe_be" #define XENSND_PCM_FORMAT_MU_LAW_STR "mu_law" #define XENSND_PCM_FORMAT_A_LAW_STR "a_law" #define XENSND_PCM_FORMAT_IMA_ADPCM_STR "ima_adpcm" #define XENSND_PCM_FORMAT_MPEG_STR "mpeg" #define XENSND_PCM_FORMAT_GSM_STR "gsm" /* ****************************************************************************** * STATUS RETURN CODES ****************************************************************************** * * Status return code is zero on success and -XEN_EXX on failure. * ****************************************************************************** * Assumptions ****************************************************************************** * o usage of grant reference 0 as invalid grant reference: * grant reference 0 is valid, but never exposed to a PV driver, * because of the fact it is already in use/reserved by the PV console. * o all references in this document to page sizes must be treated * as pages of size XEN_PAGE_SIZE unless otherwise noted. * ****************************************************************************** * Description of the protocol between frontend and backend driver ****************************************************************************** * * The two halves of a Para-virtual sound driver communicate with * each other using shared pages and event channels. * Shared page contains a ring with request/response packets. * * Packets, used for input/output operations, e.g. 
read/write, set/get volume, * etc., provide offset/length fields in order to allow asynchronous protocol * operation with buffer space sharing: part of the buffer allocated at * XENSND_OP_OPEN can be used for audio samples and part, for example, * for volume control. * * All reserved fields in the structures below must be 0. * *---------------------------------- Requests --------------------------------- * * All request packets have the same length (64 octets) * All request packets have common header: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | operation | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * id - uint16_t, private guest value, echoed in response * operation - uint8_t, operation code, XENSND_OP_??? * * For all packets which use offset and length: * offset - uint32_t, read or write data offset within the shared buffer, * passed with XENSND_OP_OPEN request, octets, * [0; XENSND_OP_OPEN.buffer_sz - 1]. * length - uint32_t, read or write data length, octets * * Request open - open a PCM stream for playback or capture: * * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | XENSND_OP_OPEN | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | pcm_rate | 12 * +----------------+----------------+----------------+----------------+ * | pcm_format | pcm_channels | reserved | 16 * +----------------+----------------+----------------+----------------+ * | buffer_sz | 20 * +----------------+----------------+----------------+----------------+ * | gref_directory | 24 * +----------------+----------------+----------------+----------------+ * | period_sz | 28 * +----------------+----------------+----------------+----------------+ * | reserved | 32 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * pcm_rate - uint32_t, stream data rate, Hz * pcm_format - uint8_t, XENSND_PCM_FORMAT_XXX value * pcm_channels - uint8_t, number of channels of this stream, * [channels-min; channels-max] * buffer_sz - uint32_t, buffer size to be allocated, octets * period_sz - uint32_t, event period size, octets * This is the requested value of the period at which frontend would * like to receive XENSND_EVT_CUR_POS notifications from the backend when * stream position advances during playback/capture. * It shows how many octets are expected to be played/captured before * sending such an event. * If set to 0 no XENSND_EVT_CUR_POS events are sent by the backend. * * gref_directory - grant_ref_t, a reference to the first shared page * describing shared buffer references. At least one page exists. 
If shared * buffer size (buffer_sz) exceeds what can be addressed by this single page, * then reference to the next page must be supplied (see gref_dir_next_page * below) */ struct xensnd_open_req { uint32_t pcm_rate; uint8_t pcm_format; uint8_t pcm_channels; uint16_t reserved; uint32_t buffer_sz; grant_ref_t gref_directory; uint32_t period_sz; }; /* * Shared page for XENSND_OP_OPEN buffer descriptor (gref_directory in the * request) employs a list of pages, describing all pages of the shared data * buffer: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | gref_dir_next_page | 4 * +----------------+----------------+----------------+----------------+ * | gref[0] | 8 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | gref[i] | i*4+8 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | gref[N - 1] | N*4+8 * +----------------+----------------+----------------+----------------+ * * gref_dir_next_page - grant_ref_t, reference to the next page describing * page directory. Must be 0 if there are no more pages in the list. * gref[i] - grant_ref_t, reference to a shared page of the buffer * allocated at XENSND_OP_OPEN * * Number of grant_ref_t entries in the whole page directory is not * passed, but instead can be calculated as: * num_grefs_total = (XENSND_OP_OPEN.buffer_sz + XEN_PAGE_SIZE - 1) / * XEN_PAGE_SIZE */ struct xensnd_page_directory { grant_ref_t gref_dir_next_page; grant_ref_t gref[1]; /* Variable length */ };
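/*
 * Illustration only: how a frontend might fill a single-page directory
 * for a buffer_sz-octet buffer. grant_page() and dir_page stand in for
 * the OS-specific grant-table call and page allocation; both are
 * hypothetical:
 *
 *	int i, num_grefs = (buffer_sz + XEN_PAGE_SIZE - 1) / XEN_PAGE_SIZE;
 *	struct xensnd_page_directory *dir = dir_page;
 *
 *	dir->gref_dir_next_page = 0;	/* everything fits in one page */
 *	for (i = 0; i < num_grefs; i++)
 *		dir->gref[i] = grant_page(buffer + i * XEN_PAGE_SIZE);
 *	open_req.gref_directory = grant_page(dir_page);
 */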
/* * Request close - close an opened PCM stream: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | XENSND_OP_CLOSE| reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * Request read/write - used for read (for capture) or write (for playback): * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | operation | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | offset | 12 * +----------------+----------------+----------------+----------------+ * | length | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * operation - XENSND_OP_READ for read or XENSND_OP_WRITE for write */ struct xensnd_rw_req { uint32_t offset; uint32_t length; }; /* * Request set/get volume - set/get channels' volume of the stream given: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | operation | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | offset | 12 * +----------------+----------------+----------------+----------------+ * | length | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * operation - XENSND_OP_SET_VOLUME for volume set * or XENSND_OP_GET_VOLUME for volume get * Buffer passed with XENSND_OP_OPEN is used to exchange volume * values: * * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | channel[0] | 4 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | channel[i] | i*4 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | channel[N - 1] | (N-1)*4 * +----------------+----------------+----------------+----------------+ * * N = XENSND_OP_OPEN.pcm_channels * i - uint8_t, index of a channel * channel[i] - int32_t, volume of i-th channel * Volume is expressed as a signed value in steps of 0.001 dB, * with 0 being 0 dB. * * Request mute/unmute - mute/unmute stream: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | operation | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | offset | 12 * +----------------+----------------+----------------+----------------+ * | length | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * operation - XENSND_OP_MUTE for mute or XENSND_OP_UNMUTE for unmute * Buffer passed with XENSND_OP_OPEN is used to exchange mute/unmute * values: * * 0 octet * +----------------+----------------+----------------+----------------+ * | channel[0] | 4 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | channel[i] | i*4 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | channel[N - 1] | (N-1)*4 * +----------------+----------------+----------------+----------------+ * * N = XENSND_OP_OPEN.pcm_channels * i - uint8_t, index of a channel * channel[i] - uint8_t, non-zero if i-th channel needs to be muted/unmuted * *------------------------------------ N.B.
----------------------------------- * * The 'struct xensnd_rw_req' is also used for XENSND_OP_SET_VOLUME, * XENSND_OP_GET_VOLUME, XENSND_OP_MUTE, XENSND_OP_UNMUTE. * * Request stream running state change - trigger PCM stream running state * to start, stop, pause or resume: * * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | _OP_TRIGGER | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | type | reserved | 12 * +----------------+----------------+----------------+----------------+ * | reserved | 16 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * type - uint8_t, XENSND_OP_TRIGGER_XXX value */ struct xensnd_trigger_req { uint8_t type; }; /* * Request stream parameter ranges: request intervals and * masks of supported ranges for stream configuration values. * * Sound device configuration for a particular stream is a limited subset * of the multidimensional configuration available on XenStore, e.g. * once the frame rate has been selected, a limited supported range * of sample rates becomes available (which might be the same set configured * on XenStore, or a subset). For example, selecting a 96kHz sample rate may * limit the number of channels available for such a configuration from 4 to 2, * etc. * Thus, each call to XENSND_OP_HW_PARAM_QUERY may reduce the configuration * space, making it possible to iteratively arrive at the final stream * configuration used in the XENSND_OP_OPEN request. * * See response format for this request.
* * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | _HW_PARAM_QUERY| reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | formats mask low 32-bit | 12 * +----------------+----------------+----------------+----------------+ * | formats mask high 32-bit | 16 * +----------------+----------------+----------------+----------------+ * | min rate | 20 * +----------------+----------------+----------------+----------------+ * | max rate | 24 * +----------------+----------------+----------------+----------------+ * | min channels | 28 * +----------------+----------------+----------------+----------------+ * | max channels | 32 * +----------------+----------------+----------------+----------------+ * | min buffer frames | 36 * +----------------+----------------+----------------+----------------+ * | max buffer frames | 40 * +----------------+----------------+----------------+----------------+ * | min period frames | 44 * +----------------+----------------+----------------+----------------+ * | max period frames | 48 * +----------------+----------------+----------------+----------------+ * | reserved | 52 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * formats - uint64_t, bit mask representing values of the parameter * formed as a bitwise OR of (1 << XENSND_PCM_FORMAT_XXX) values * * For interval parameters: * min - uint32_t, minimum value of the parameter * max - uint32_t, maximum value of the parameter * * A frame is defined as the product of the number of channels and the * number of octets per sample.
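 * For example, with XENSND_PCM_FORMAT_S16_LE (2 octets per sample) and
 * 2 channels, one frame is 2 * 2 = 4 octets, so a 4096-octet period
 * corresponds to 1024 period frames.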
*/ struct xensnd_query_hw_param { uint64_t formats; struct { uint32_t min; uint32_t max; } rates; struct { uint32_t min; uint32_t max; } channels; struct { uint32_t min; uint32_t max; } buffer; struct { uint32_t min; uint32_t max; } period; }; /* *---------------------------------- Responses -------------------------------- * * All response packets have the same length (64 octets) * * All response packets have common header: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | operation | reserved | 4 * +----------------+----------------+----------------+----------------+ * | status | 8 * +----------------+----------------+----------------+----------------+ * * id - uint16_t, copied from the request * operation - uint8_t, XENSND_OP_* - copied from request * status - int32_t, response status, zero on success and -XEN_EXX on failure * * * HW parameter query response - response for XENSND_OP_HW_PARAM_QUERY: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | operation | reserved | 4 * +----------------+----------------+----------------+----------------+ * | status | 8 * +----------------+----------------+----------------+----------------+ * | formats mask low 32-bit | 12 * +----------------+----------------+----------------+----------------+ * | formats mask high 32-bit | 16 * +----------------+----------------+----------------+----------------+ * | min rate | 20 * +----------------+----------------+----------------+----------------+ * | max rate | 24 * +----------------+----------------+----------------+----------------+ * | min channels | 28 * +----------------+----------------+----------------+----------------+ * | max channels | 32 * +----------------+----------------+----------------+----------------+ * | min buffer frames | 36 * +----------------+----------------+----------------+----------------+ * | max buffer frames | 40 * +----------------+----------------+----------------+----------------+ * | min period frames | 44 * +----------------+----------------+----------------+----------------+ * | max period frames | 48 * +----------------+----------------+----------------+----------------+ * | reserved | 52 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * Meaning of the values in this response is the same as for * XENSND_OP_HW_PARAM_QUERY request. 
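 *
 * Illustration only (not mandated by the protocol): a frontend can use
 * this request iteratively to narrow the configuration space before
 * XENSND_OP_OPEN. query_hw_param() below is a hypothetical helper that
 * sends the request and overwrites its argument with the response:
 *
 *	struct xensnd_query_hw_param hw = {
 *		.formats  = ~0ULL,		/* any format */
 *		.rates    = { 0, UINT32_MAX },
 *		.channels = { 0, UINT32_MAX },
 *		.buffer   = { 0, UINT32_MAX },
 *		.period   = { 0, UINT32_MAX },
 *	};
 *
 *	query_hw_param(&hw);			/* full space */
 *	hw.rates.min = hw.rates.max = 96000;	/* pin the rate... */
 *	query_hw_param(&hw);			/* ...see what is left */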
*/ /* *----------------------------------- Events ---------------------------------- * * Events are sent via a shared page allocated by the front and propagated by * evt-event-channel/evt-ring-ref XenStore entries * All event packets have the same length (64 octets) * All event packets have common header: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | type | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * * id - uint16_t, event id, may be used by front * type - uint8_t, type of the event * * * Current stream position - event from back to front when stream's * playback/capture position has advanced: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | _EVT_CUR_POS | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | position low 32-bit | 12 * +----------------+----------------+----------------+----------------+ * | position high 32-bit | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * position - current value of stream's playback/capture position, octets * */ struct xensnd_cur_pos_evt { uint64_t position; }; struct xensnd_req { uint16_t id; uint8_t operation; uint8_t reserved[5]; union { struct xensnd_open_req open; struct xensnd_rw_req rw; struct xensnd_trigger_req trigger; struct xensnd_query_hw_param hw_param; uint8_t reserved[56]; } op; }; struct xensnd_resp { uint16_t id; uint8_t operation; uint8_t reserved; int32_t status; union { struct xensnd_query_hw_param hw_param; uint8_t reserved1[56]; } resp; }; struct xensnd_evt { uint16_t id; uint8_t type; uint8_t reserved[5]; union { struct xensnd_cur_pos_evt cur_pos; uint8_t reserved[56]; } op; }; DEFINE_RING_TYPES(xen_sndif, struct xensnd_req, struct xensnd_resp); /* ****************************************************************************** * Back to front events delivery ****************************************************************************** * In order to deliver asynchronous events from back to front a shared page is * allocated by front and its granted reference propagated to back via * XenStore entries (evt-ring-ref/evt-event-channel). * This page has a common header used by both front and back to synchronize * access and control event's ring buffer, with the back being the producer of * the events and the front being the consumer. The rest of the page after the * header is used for event packets. * * Upon reception of events the front may confirm their reception * for each event, for a group of events, or not at all.
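 *
 * A minimal consumer-side sketch (illustration only; memory barriers
 * and event-channel signalling are omitted), using the page layout and
 * helper macros defined below; shared_event_page and handle_cur_pos()
 * are hypothetical:
 *
 *	struct xensnd_event_page *page = shared_event_page;
 *
 *	while (page->in_cons != page->in_prod) {
 *		struct xensnd_evt *evt =
 *			&XENSND_IN_RING_REF(page, page->in_cons);
 *
 *		if (evt->type == XENSND_EVT_CUR_POS)
 *			handle_cur_pos(evt->op.cur_pos.position);
 *		page->in_cons++;
 *	}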
*/ struct xensnd_event_page { uint32_t in_cons; uint32_t in_prod; uint8_t reserved[56]; }; #define XENSND_EVENT_PAGE_SIZE XEN_PAGE_SIZE #define XENSND_IN_RING_OFFS (sizeof(struct xensnd_event_page)) #define XENSND_IN_RING_SIZE (XENSND_EVENT_PAGE_SIZE - XENSND_IN_RING_OFFS) #define XENSND_IN_RING_LEN (XENSND_IN_RING_SIZE / sizeof(struct xensnd_evt)) #define XENSND_IN_RING(page) \ ((struct xensnd_evt *)((char *)(page) + XENSND_IN_RING_OFFS)) #define XENSND_IN_RING_REF(page, idx) \ (XENSND_IN_RING((page))[(idx) % XENSND_IN_RING_LEN]) #endif /* __XEN_PUBLIC_IO_SNDIF_H__ */ interface/io/ring.h 0000644 00000056314 14722073410 0010234 0 ustar 00 /****************************************************************************** * ring.h * * Shared producer-consumer ring macros. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Tim Deegan and Andrew Warfield November 2004. */ #ifndef __XEN_PUBLIC_IO_RING_H__ #define __XEN_PUBLIC_IO_RING_H__ /* * When #include'ing this header, you need to provide the following * declarations upfront: * - standard integer types (uint8_t, uint16_t, etc) * They are provided by stdint.h of the standard headers. * * In addition, if you intend to use the FLEX macros, you also need to * provide the following, before invoking the FLEX macros: * - size_t * - memcpy * - grant_ref_t * These declarations are provided by string.h of the standard headers, * and grant_table.h from the Xen public headers. */ #include <xen/interface/grant_table.h> typedef unsigned int RING_IDX; /* Round a 32-bit unsigned constant down to the nearest power of two. */ #define __RD2(_x) (((_x) & 0x00000002) ? 0x2 : ((_x) & 0x1)) #define __RD4(_x) (((_x) & 0x0000000c) ? __RD2((_x)>>2)<<2 : __RD2(_x)) #define __RD8(_x) (((_x) & 0x000000f0) ? __RD4((_x)>>4)<<4 : __RD4(_x)) #define __RD16(_x) (((_x) & 0x0000ff00) ? __RD8((_x)>>8)<<8 : __RD8(_x)) #define __RD32(_x) (((_x) & 0xffff0000) ? __RD16((_x)>>16)<<16 : __RD16(_x)) /* * Calculate size of a shared ring, given the total available space for the * ring and indexes (_sz), and the name tag of the request/response structure. * A ring contains as many entries as will fit, rounded down to the nearest * power of two (so we can mask with (size-1) to loop around). */ #define __CONST_RING_SIZE(_s, _sz) \ (__RD32(((_sz) - offsetof(struct _s##_sring, ring)) / \ sizeof(((struct _s##_sring *)0)->ring[0]))) /* * The same for passing in an actual pointer instead of a name tag.
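 *
 * For example, given the sndif ring defined via
 * DEFINE_RING_TYPES(xen_sndif, ...) in sndif.h above, the number of
 * entries that fit into one 4096-octet shared page is
 * __CONST_RING_SIZE(xen_sndif, 4096).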
*/ #define __RING_SIZE(_s, _sz) \ (__RD32(((_sz) - (long)(_s)->ring + (long)(_s)) / sizeof((_s)->ring[0]))) /* * Macros to make the correct C datatypes for a new kind of ring. * * To make a new ring datatype, you need to have two message structures, * let's say request_t, and response_t already defined. * * In a header where you want the ring datatype declared, you then do: * * DEFINE_RING_TYPES(mytag, request_t, response_t); * * These expand out to give you a set of types, as you can see below. * The most important of these are: * * mytag_sring_t - The shared ring. * mytag_front_ring_t - The 'front' half of the ring. * mytag_back_ring_t - The 'back' half of the ring. * * To initialize a ring in your code you need to know the location and size * of the shared memory area (PAGE_SIZE, for instance). To initialise * the front half: * * mytag_front_ring_t front_ring; * SHARED_RING_INIT((mytag_sring_t *)shared_page); * FRONT_RING_INIT(&front_ring, (mytag_sring_t *)shared_page, PAGE_SIZE); * * Initializing the back follows similarly (note that only the front * initializes the shared ring): * * mytag_back_ring_t back_ring; * BACK_RING_INIT(&back_ring, (mytag_sring_t *)shared_page, PAGE_SIZE); */ #define DEFINE_RING_TYPES(__name, __req_t, __rsp_t) \ \ /* Shared ring entry */ \ union __name##_sring_entry { \ __req_t req; \ __rsp_t rsp; \ }; \ \ /* Shared ring page */ \ struct __name##_sring { \ RING_IDX req_prod, req_event; \ RING_IDX rsp_prod, rsp_event; \ uint8_t __pad[48]; \ union __name##_sring_entry ring[1]; /* variable-length */ \ }; \ \ /* "Front" end's private variables */ \ struct __name##_front_ring { \ RING_IDX req_prod_pvt; \ RING_IDX rsp_cons; \ unsigned int nr_ents; \ struct __name##_sring *sring; \ }; \ \ /* "Back" end's private variables */ \ struct __name##_back_ring { \ RING_IDX rsp_prod_pvt; \ RING_IDX req_cons; \ unsigned int nr_ents; \ struct __name##_sring *sring; \ }; \ \ /* * Macros for manipulating rings. * * FRONT_RING_whatever works on the "front end" of a ring: here * requests are pushed on to the ring and responses taken off it. * * BACK_RING_whatever works on the "back end" of a ring: here * requests are taken off the ring and responses put on. * * N.B. these macros do NO INTERLOCKS OR FLOW CONTROL. * This is OK in 1-for-1 request-response situations where the * requestor (front end) never has more than RING_SIZE()-1 * outstanding requests. */ /* Initialising empty rings */ #define SHARED_RING_INIT(_s) do { \ (_s)->req_prod = (_s)->rsp_prod = 0; \ (_s)->req_event = (_s)->rsp_event = 1; \ (void)memset((_s)->__pad, 0, sizeof((_s)->__pad)); \ } while(0) #define FRONT_RING_ATTACH(_r, _s, _i, __size) do { \ (_r)->req_prod_pvt = (_i); \ (_r)->rsp_cons = (_i); \ (_r)->nr_ents = __RING_SIZE(_s, __size); \ (_r)->sring = (_s); \ } while (0) #define FRONT_RING_INIT(_r, _s, __size) FRONT_RING_ATTACH(_r, _s, 0, __size) #define BACK_RING_ATTACH(_r, _s, _i, __size) do { \ (_r)->rsp_prod_pvt = (_i); \ (_r)->req_cons = (_i); \ (_r)->nr_ents = __RING_SIZE(_s, __size); \ (_r)->sring = (_s); \ } while (0) #define BACK_RING_INIT(_r, _s, __size) BACK_RING_ATTACH(_r, _s, 0, __size) /* How big is this ring? */ #define RING_SIZE(_r) \ ((_r)->nr_ents) /* Number of free requests (for use on front side only). */ #define RING_FREE_REQUESTS(_r) \ (RING_SIZE(_r) - ((_r)->req_prod_pvt - (_r)->rsp_cons)) /* Test if there is an empty slot available on the front ring. * (This is only meaningful from the front. 
) */ #define RING_FULL(_r) \ (RING_FREE_REQUESTS(_r) == 0) /* Test if there are outstanding messages to be processed on a ring. */ #define RING_HAS_UNCONSUMED_RESPONSES(_r) \ ((_r)->sring->rsp_prod - (_r)->rsp_cons) #define RING_HAS_UNCONSUMED_REQUESTS(_r) ({ \ unsigned int req = (_r)->sring->req_prod - (_r)->req_cons; \ unsigned int rsp = RING_SIZE(_r) - \ ((_r)->req_cons - (_r)->rsp_prod_pvt); \ req < rsp ? req : rsp; \ }) /* Direct access to individual ring elements, by index. */ #define RING_GET_REQUEST(_r, _idx) \ (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req)) #define RING_GET_RESPONSE(_r, _idx) \ (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp)) /* * Get a local copy of a request/response. * * Use this in preference to RING_GET_{REQUEST,RESPONSE}() so all processing is * done on a local copy that cannot be modified by the other end. * * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this * to be ineffective where dest is a struct which consists of only bitfields. */ #define RING_COPY_(type, r, idx, dest) do { \ /* Use volatile to force the copy into dest. */ \ *(dest) = *(volatile typeof(dest))RING_GET_##type(r, idx); \ } while (0) #define RING_COPY_REQUEST(r, idx, req) RING_COPY_(REQUEST, r, idx, req) #define RING_COPY_RESPONSE(r, idx, rsp) RING_COPY_(RESPONSE, r, idx, rsp) /* Loop termination condition: Would the specified index overflow the ring? */ #define RING_REQUEST_CONS_OVERFLOW(_r, _cons) \ (((_cons) - (_r)->rsp_prod_pvt) >= RING_SIZE(_r)) /* Ill-behaved frontend determination: Can there be this many requests? */ #define RING_REQUEST_PROD_OVERFLOW(_r, _prod) \ (((_prod) - (_r)->rsp_prod_pvt) > RING_SIZE(_r)) /* Ill-behaved backend determination: Can there be this many responses? */ #define RING_RESPONSE_PROD_OVERFLOW(_r, _prod) \ (((_prod) - (_r)->rsp_cons) > RING_SIZE(_r)) #define RING_PUSH_REQUESTS(_r) do { \ virt_wmb(); /* back sees requests /before/ updated producer index */\ (_r)->sring->req_prod = (_r)->req_prod_pvt; \ } while (0) #define RING_PUSH_RESPONSES(_r) do { \ virt_wmb(); /* front sees resps /before/ updated producer index */ \ (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt; \ } while (0) /* * Notification hold-off (req_event and rsp_event): * * When queueing requests or responses on a shared ring, it may not always be * necessary to notify the remote end. For example, if requests are in flight * in a backend, the front may be able to queue further requests without * notifying the back (if the back checks for new requests when it queues * responses). * * When enqueuing requests or responses: * * Use RING_PUSH_{REQUESTS,RESPONSES}_AND_CHECK_NOTIFY(). The second argument * is a boolean return value. True indicates that the receiver requires an * asynchronous notification. * * After dequeuing requests or responses (before sleeping the connection): * * Use RING_FINAL_CHECK_FOR_REQUESTS() or RING_FINAL_CHECK_FOR_RESPONSES(). * The second argument is a boolean return value. True indicates that there * are pending messages on the ring (i.e., the connection should not be put * to sleep). * * These macros will set the req_event/rsp_event field to trigger a * notification on the very next message that is enqueued. If you want to * create batches of work (i.e., only receive a notification after several * messages have been enqueued) then you will need to create a customised * version of the FINAL_CHECK macro in your own code, which sets the event * field appropriately. 
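 *
 * Illustrative sketch only; notify_remote_via_irq() is a Linux-style
 * event-channel helper assumed here, not something defined by this
 * header. A frontend pushing requests:
 *
 *   int notify;
 *
 *   RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&front_ring, notify);
 *   if (notify)
 *       notify_remote_via_irq(irq);
 *
 * A backend draining requests before going back to sleep:
 *
 *   int more_to_do;
 *
 *   do {
 *       while (RING_HAS_UNCONSUMED_REQUESTS(&back_ring)) {
 *           RING_COPY_REQUEST(&back_ring, back_ring.req_cons, &req);
 *           back_ring.req_cons++;
 *           // ... process req and queue a response ...
 *       }
 *       RING_FINAL_CHECK_FOR_REQUESTS(&back_ring, more_to_do);
 *   } while (more_to_do);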
*/ #define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do { \ RING_IDX __old = (_r)->sring->req_prod; \ RING_IDX __new = (_r)->req_prod_pvt; \ virt_wmb(); /* back sees requests /before/ updated producer index */\ (_r)->sring->req_prod = __new; \ virt_mb(); /* back sees new requests /before/ we check req_event */ \ (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) < \ (RING_IDX)(__new - __old)); \ } while (0) #define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do { \ RING_IDX __old = (_r)->sring->rsp_prod; \ RING_IDX __new = (_r)->rsp_prod_pvt; \ virt_wmb(); /* front sees resps /before/ updated producer index */ \ (_r)->sring->rsp_prod = __new; \ virt_mb(); /* front sees new resps /before/ we check rsp_event */ \ (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) < \ (RING_IDX)(__new - __old)); \ } while (0) #define RING_FINAL_CHECK_FOR_REQUESTS(_r, _work_to_do) do { \ (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r); \ if (_work_to_do) break; \ (_r)->sring->req_event = (_r)->req_cons + 1; \ virt_mb(); \ (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r); \ } while (0) #define RING_FINAL_CHECK_FOR_RESPONSES(_r, _work_to_do) do { \ (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r); \ if (_work_to_do) break; \ (_r)->sring->rsp_event = (_r)->rsp_cons + 1; \ virt_mb(); \ (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r); \ } while (0) /* * DEFINE_XEN_FLEX_RING_AND_INTF defines two monodirectional rings and * functions to check if there is data on the ring, and to read and * write to them. * * DEFINE_XEN_FLEX_RING is similar to DEFINE_XEN_FLEX_RING_AND_INTF, but * does not define the indexes page. As different protocols can have * extensions to the basic format, this macro allow them to define their * own struct. * * XEN_FLEX_RING_SIZE * Convenience macro to calculate the size of one of the two rings * from the overall order. * * $NAME_mask * Function to apply the size mask to an index, to reduce the index * within the range [0-size]. * * $NAME_read_packet * Function to read data from the ring. The amount of data to read is * specified by the "size" argument. * * $NAME_write_packet * Function to write data to the ring. The amount of data to write is * specified by the "size" argument. * * $NAME_get_ring_ptr * Convenience function that returns a pointer to read/write to the * ring at the right location. * * $NAME_data_intf * Indexes page, shared between frontend and backend. It also * contains the array of grant refs. * * $NAME_queued * Function to calculate how many bytes are currently on the ring, * ready to be read. It can also be used to calculate how much free * space is currently on the ring (XEN_FLEX_RING_SIZE() - * $NAME_queued()). */ #ifndef XEN_PAGE_SHIFT /* The PAGE_SIZE for ring protocols and hypercall interfaces is always * 4K, regardless of the architecture, and page granularity chosen by * operating systems. 
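 *
 * For example, with XEN_PAGE_SHIFT == 12 a ring_order of 1 denotes a
 * two-page allocation split evenly between the two rings, so
 * XEN_FLEX_RING_SIZE(1) == 4096 octets for each of the 'in' and 'out'
 * halves, XEN_FLEX_RING_SIZE(2) == 8192, and so on.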
*/ #define XEN_PAGE_SHIFT 12 #endif #define XEN_FLEX_RING_SIZE(order) \ (1UL << ((order) + XEN_PAGE_SHIFT - 1)) #define DEFINE_XEN_FLEX_RING(name) \ static inline RING_IDX name##_mask(RING_IDX idx, RING_IDX ring_size) \ { \ return idx & (ring_size - 1); \ } \ \ static inline unsigned char *name##_get_ring_ptr(unsigned char *buf, \ RING_IDX idx, \ RING_IDX ring_size) \ { \ return buf + name##_mask(idx, ring_size); \ } \ \ static inline void name##_read_packet(void *opaque, \ const unsigned char *buf, \ size_t size, \ RING_IDX masked_prod, \ RING_IDX *masked_cons, \ RING_IDX ring_size) \ { \ if (*masked_cons < masked_prod || \ size <= ring_size - *masked_cons) { \ memcpy(opaque, buf + *masked_cons, size); \ } else { \ memcpy(opaque, buf + *masked_cons, ring_size - *masked_cons); \ memcpy((unsigned char *)opaque + ring_size - *masked_cons, buf, \ size - (ring_size - *masked_cons)); \ } \ *masked_cons = name##_mask(*masked_cons + size, ring_size); \ } \ \ static inline void name##_write_packet(unsigned char *buf, \ const void *opaque, \ size_t size, \ RING_IDX *masked_prod, \ RING_IDX masked_cons, \ RING_IDX ring_size) \ { \ if (*masked_prod < masked_cons || \ size <= ring_size - *masked_prod) { \ memcpy(buf + *masked_prod, opaque, size); \ } else { \ memcpy(buf + *masked_prod, opaque, ring_size - *masked_prod); \ memcpy(buf, (unsigned char *)opaque + (ring_size - *masked_prod), \ size - (ring_size - *masked_prod)); \ } \ *masked_prod = name##_mask(*masked_prod + size, ring_size); \ } \ \ static inline RING_IDX name##_queued(RING_IDX prod, \ RING_IDX cons, \ RING_IDX ring_size) \ { \ RING_IDX size; \ \ if (prod == cons) \ return 0; \ \ prod = name##_mask(prod, ring_size); \ cons = name##_mask(cons, ring_size); \ \ if (prod == cons) \ return ring_size; \ \ if (prod > cons) \ size = prod - cons; \ else \ size = ring_size - (cons - prod); \ return size; \ } \ \ struct name##_data { \ unsigned char *in; /* half of the allocation */ \ unsigned char *out; /* half of the allocation */ \ } #define DEFINE_XEN_FLEX_RING_AND_INTF(name) \ struct name##_data_intf { \ RING_IDX in_cons, in_prod; \ \ uint8_t pad1[56]; \ \ RING_IDX out_cons, out_prod; \ \ uint8_t pad2[56]; \ \ RING_IDX ring_order; \ grant_ref_t ref[]; \ }; \ DEFINE_XEN_FLEX_RING(name) #endif /* __XEN_PUBLIC_IO_RING_H__ */ interface/io/xenbus.h 0000644 00000002450 14722073410 0010571 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /***************************************************************************** * xenbus.h * * Xenbus protocol details. * * Copyright (C) 2005 XenSource Ltd. */ #ifndef _XEN_PUBLIC_IO_XENBUS_H #define _XEN_PUBLIC_IO_XENBUS_H /* The state of either end of the Xenbus, i.e. the current communication status of initialisation across the bus. States here imply nothing about the state of the connection between the driver and the kernel's device layers. */ enum xenbus_state { XenbusStateUnknown = 0, XenbusStateInitialising = 1, XenbusStateInitWait = 2, /* Finished early initialisation, but waiting for information from the peer or hotplug scripts. */ XenbusStateInitialised = 3, /* Initialised and waiting for a connection from the peer. */ XenbusStateConnected = 4, XenbusStateClosing = 5, /* The device is being closed due to an error or an unplug event. */ XenbusStateClosed = 6, /* * Reconfiguring: The device is being reconfigured. 
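 *
 * Reconfigured: The device has been reconfigured.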
*/ XenbusStateReconfiguring = 7, XenbusStateReconfigured = 8 }; #endif /* _XEN_PUBLIC_IO_XENBUS_H */ /* * Local variables: * c-file-style: "linux" * indent-tabs-mode: t * c-indent-level: 8 * c-basic-offset: 8 * tab-width: 8 * End: */ interface/io/tpmif.h 0000644 00000003251 14722073410 0010404 0 ustar 00 /****************************************************************************** * tpmif.h * * TPM I/O interface for Xen guest OSes, v2 * * This file is in the public domain. * */ #ifndef __XEN_PUBLIC_IO_TPMIF_H__ #define __XEN_PUBLIC_IO_TPMIF_H__ /* * Xenbus state machine * * Device open: * 1. Both ends start in XenbusStateInitialising * 2. Backend transitions to InitWait (frontend does not wait on this step) * 3. Frontend populates ring-ref, event-channel, feature-protocol-v2 * 4. Frontend transitions to Initialised * 5. Backend maps grant and event channel, verifies feature-protocol-v2 * 6. Backend transitions to Connected * 7. Frontend verifies feature-protocol-v2, transitions to Connected * * Device close: * 1. State is changed to XenbusStateClosing * 2. Frontend transitions to Closed * 3. Backend unmaps grant and event, changes state to InitWait */ enum vtpm_shared_page_state { VTPM_STATE_IDLE, /* no contents / vTPM idle / cancel complete */ VTPM_STATE_SUBMIT, /* request ready / vTPM working */ VTPM_STATE_FINISH, /* response ready / vTPM idle */ VTPM_STATE_CANCEL, /* cancel requested / vTPM working */ }; /* The backend should only change state to IDLE or FINISH, while the * frontend should only change to SUBMIT or CANCEL. */ struct vtpm_shared_page { uint32_t length; /* request/response length in bytes */ uint8_t state; /* enum vtpm_shared_page_state */ uint8_t locality; /* for the current request */ uint8_t pad; uint8_t nr_extra_pages; /* extra pages for long packets; may be zero */ uint32_t extra_pages[0]; /* grant IDs; length in nr_extra_pages */ }; #endif interface/io/console.h 0000644 00000001113 14722073410 0010722 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /****************************************************************************** * console.h * * Console I/O interface for Xen guest OSes. * * Copyright (c) 2005, Keir Fraser */ #ifndef __XEN_PUBLIC_IO_CONSOLE_H__ #define __XEN_PUBLIC_IO_CONSOLE_H__ typedef uint32_t XENCONS_RING_IDX; #define MASK_XENCONS_IDX(idx, ring) ((idx) & (sizeof(ring)-1)) struct xencons_interface { char in[1024]; char out[2048]; XENCONS_RING_IDX in_cons, in_prod; XENCONS_RING_IDX out_cons, out_prod; }; #endif /* __XEN_PUBLIC_IO_CONSOLE_H__ */ interface/io/pciif.h 0000644 00000007021 14722073410 0010356 0 ustar 00 /* * PCI Backend/Frontend Common Data Structures & Macros * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Author: Ryan Wilson <hap9@epoch.ncsc.mil> */ #ifndef __XEN_PCI_COMMON_H__ #define __XEN_PCI_COMMON_H__ /* Be sure to bump this number if you change this file */ #define XEN_PCI_MAGIC "7" /* xen_pci_sharedinfo flags */ #define _XEN_PCIF_active (0) #define XEN_PCIF_active (1<<_XEN_PCIF_active) #define _XEN_PCIB_AERHANDLER (1) #define XEN_PCIB_AERHANDLER (1<<_XEN_PCIB_AERHANDLER) #define _XEN_PCIB_active (2) #define XEN_PCIB_active (1<<_XEN_PCIB_active) /* xen_pci_op commands */ #define XEN_PCI_OP_conf_read (0) #define XEN_PCI_OP_conf_write (1) #define XEN_PCI_OP_enable_msi (2) #define XEN_PCI_OP_disable_msi (3) #define XEN_PCI_OP_enable_msix (4) #define XEN_PCI_OP_disable_msix (5) #define XEN_PCI_OP_aer_detected (6) #define XEN_PCI_OP_aer_resume (7) #define XEN_PCI_OP_aer_mmio (8) #define XEN_PCI_OP_aer_slotreset (9) /* xen_pci_op error numbers */ #define XEN_PCI_ERR_success (0) #define XEN_PCI_ERR_dev_not_found (-1) #define XEN_PCI_ERR_invalid_offset (-2) #define XEN_PCI_ERR_access_denied (-3) #define XEN_PCI_ERR_not_implemented (-4) /* XEN_PCI_ERR_op_failed - backend failed to complete the operation */ #define XEN_PCI_ERR_op_failed (-5) /* * it should be PAGE_SIZE-sizeof(struct xen_pci_op))/sizeof(struct msix_entry)) * Should not exceed 128 */ #define SH_INFO_MAX_VEC 128 struct xen_msix_entry { uint16_t vector; uint16_t entry; }; struct xen_pci_op { /* IN: what action to perform: XEN_PCI_OP_* */ uint32_t cmd; /* OUT: will contain an error number (if any) from errno.h */ int32_t err; /* IN: which device to touch */ uint32_t domain; /* PCI Domain/Segment */ uint32_t bus; uint32_t devfn; /* IN: which configuration registers to touch */ int32_t offset; int32_t size; /* IN/OUT: Contains the result after a READ or the value to WRITE */ uint32_t value; /* IN: Contains extra infor for this operation */ uint32_t info; /*IN: param for msi-x */ struct xen_msix_entry msix_entries[SH_INFO_MAX_VEC]; }; /*used for pcie aer handling*/ struct xen_pcie_aer_op { /* IN: what action to perform: XEN_PCI_OP_* */ uint32_t cmd; /*IN/OUT: return aer_op result or carry error_detected state as input*/ int32_t err; /* IN: which device to touch */ uint32_t domain; /* PCI Domain/Segment*/ uint32_t bus; uint32_t devfn; }; struct xen_pci_sharedinfo { /* flags - XEN_PCIF_* */ uint32_t flags; struct xen_pci_op op; struct xen_pcie_aer_op aer_op; }; #endif /* __XEN_PCI_COMMON_H__ */ interface/io/protocols.h 0000644 00000001271 14722073410 0011311 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef __XEN_PROTOCOLS_H__ #define __XEN_PROTOCOLS_H__ #define XEN_IO_PROTO_ABI_X86_32 "x86_32-abi" #define XEN_IO_PROTO_ABI_X86_64 "x86_64-abi" #define XEN_IO_PROTO_ABI_POWERPC64 "powerpc64-abi" #define XEN_IO_PROTO_ABI_ARM "arm-abi" #if defined(__i386__) # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_32 #elif defined(__x86_64__) # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_64 #elif defined(__powerpc64__) # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_POWERPC64 #elif defined(__arm__) || defined(__aarch64__) # define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_ARM #else # error arch fixup needed here #endif #endif interface/io/displif.h 0000644 00000116464 14722073410 0010732 0 ustar 00 
/****************************************************************************** * displif.h * * Unified display device I/O interface for Xen guest OSes. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (C) 2016-2017 EPAM Systems Inc. * * Authors: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com> * Oleksandr Grytsov <oleksandr_grytsov@epam.com> */ #ifndef __XEN_PUBLIC_IO_DISPLIF_H__ #define __XEN_PUBLIC_IO_DISPLIF_H__ #include "ring.h" #include "../grant_table.h" /* ****************************************************************************** * Protocol version ****************************************************************************** */ #define XENDISPL_PROTOCOL_VERSION "1" /* ****************************************************************************** * Main features provided by the protocol ****************************************************************************** * This protocol aims to provide a unified protocol which fits more * sophisticated use-cases than a framebuffer device can handle. At the * moment basic functionality is supported with the intention to be extended: * o multiple dynamically allocated/destroyed framebuffers * o buffers of arbitrary sizes * o buffer allocation at either back or front end * o better configuration options including multiple display support * * Note: existing fbif can be used together with displif running at the * same time, e.g. on Linux one provides framebuffer and another DRM/KMS * * Note: display resolution (XenStore's "resolution" property) defines * visible area of the virtual display. At the same time resolution of * the display and frame buffers may differ: buffers can be smaller, equal * or bigger than the visible area. This is to enable use-cases, where backend * may do some post-processing of the display and frame buffers supplied, * e.g. those buffers can be just a part of the final composition. 
* ****************************************************************************** * Direction of improvements ****************************************************************************** * Future extensions to the existing protocol may include: * o display/connector cloning * o allocation of objects other than display buffers * o plane/overlay support * o scaling support * o rotation support * ****************************************************************************** * Feature and Parameter Negotiation ****************************************************************************** * * Front->back notifications: when enqueuing a new request, sending a * notification can be made conditional on xendispl_req (i.e., the generic * hold-off mechanism provided by the ring macros). Backends must set * xendispl_req appropriately (e.g., using RING_FINAL_CHECK_FOR_REQUESTS()). * * Back->front notifications: when enqueuing a new response, sending a * notification can be made conditional on xendispl_resp (i.e., the generic * hold-off mechanism provided by the ring macros). Frontends must set * xendispl_resp appropriately (e.g., using RING_FINAL_CHECK_FOR_RESPONSES()). * * The two halves of a para-virtual display driver utilize nodes within * XenStore to communicate capabilities and to negotiate operating parameters. * This section enumerates these nodes which reside in the respective front and * backend portions of XenStore, following the XenBus convention. * * All data in XenStore is stored as strings. Nodes specifying numeric * values are encoded in decimal. Integer value ranges listed below are * expressed as fixed sized integer types capable of storing the conversion * of a properly formated node string, without loss of information. * ****************************************************************************** * Example configuration ****************************************************************************** * * Note: depending on the use-case backend can expose more display connectors * than the underlying HW physically has by employing SW graphics compositors * * This is an example of backend and frontend configuration: * *--------------------------------- Backend ----------------------------------- * * /local/domain/0/backend/vdispl/1/0/frontend-id = "1" * /local/domain/0/backend/vdispl/1/0/frontend = "/local/domain/1/device/vdispl/0" * /local/domain/0/backend/vdispl/1/0/state = "4" * /local/domain/0/backend/vdispl/1/0/versions = "1,2" * *--------------------------------- Frontend ---------------------------------- * * /local/domain/1/device/vdispl/0/backend-id = "0" * /local/domain/1/device/vdispl/0/backend = "/local/domain/0/backend/vdispl/1/0" * /local/domain/1/device/vdispl/0/state = "4" * /local/domain/1/device/vdispl/0/version = "1" * /local/domain/1/device/vdispl/0/be-alloc = "1" * *-------------------------- Connector 0 configuration ------------------------ * * /local/domain/1/device/vdispl/0/0/resolution = "1920x1080" * /local/domain/1/device/vdispl/0/0/req-ring-ref = "2832" * /local/domain/1/device/vdispl/0/0/req-event-channel = "15" * /local/domain/1/device/vdispl/0/0/evt-ring-ref = "387" * /local/domain/1/device/vdispl/0/0/evt-event-channel = "16" * *-------------------------- Connector 1 configuration ------------------------ * * /local/domain/1/device/vdispl/0/1/resolution = "800x600" * /local/domain/1/device/vdispl/0/1/req-ring-ref = "2833" * /local/domain/1/device/vdispl/0/1/req-event-channel = "17" * /local/domain/1/device/vdispl/0/1/evt-ring-ref = "388" * 
/local/domain/1/device/vdispl/0/1/evt-event-channel = "18" * ****************************************************************************** * Backend XenBus Nodes ****************************************************************************** * *----------------------------- Protocol version ------------------------------ * * versions * Values: <string> * * List of XENDISPL_LIST_SEPARATOR separated protocol versions supported * by the backend. For example "1,2,3". * ****************************************************************************** * Frontend XenBus Nodes ****************************************************************************** * *-------------------------------- Addressing --------------------------------- * * dom-id * Values: <uint16_t> * * Domain identifier. * * dev-id * Values: <uint16_t> * * Device identifier. * * conn-idx * Values: <uint8_t> * * Zero based contigous index of the connector. * /local/domain/<dom-id>/device/vdispl/<dev-id>/<conn-idx>/... * *----------------------------- Protocol version ------------------------------ * * version * Values: <string> * * Protocol version, chosen among the ones supported by the backend. * *------------------------- Backend buffer allocation ------------------------- * * be-alloc * Values: "0", "1" * * If value is set to "1", then backend can be a buffer provider/allocator * for this domain during XENDISPL_OP_DBUF_CREATE operation (see below * for negotiation). * If value is not "1" or omitted frontend must allocate buffers itself. * *----------------------------- Connector settings ---------------------------- * * unique-id * Values: <string> * * After device instance initialization each connector is assigned a * unique ID, so it can be identified by the backend by this ID. * This can be UUID or such. * * resolution * Values: <width, uint32_t>x<height, uint32_t> * * Width and height of the connector in pixels separated by * XENDISPL_RESOLUTION_SEPARATOR. This defines visible area of the * display. * *------------------ Connector Request Transport Parameters ------------------- * * This communication path is used to deliver requests from frontend to backend * and get the corresponding responses from backend to frontend, * set up per connector. * * req-event-channel * Values: <uint32_t> * * The identifier of the Xen connector's control event channel * used to signal activity in the ring buffer. * * req-ring-ref * Values: <uint32_t> * * The Xen grant reference granting permission for the backend to map * a sole page of connector's control ring buffer. * *------------------- Connector Event Transport Parameters -------------------- * * This communication path is used to deliver asynchronous events from backend * to frontend, set up per connector. * * evt-event-channel * Values: <uint32_t> * * The identifier of the Xen connector's event channel * used to signal activity in the ring buffer. * * evt-ring-ref * Values: <uint32_t> * * The Xen grant reference granting permission for the backend to map * a sole page of connector's event ring buffer. */ /* ****************************************************************************** * STATE DIAGRAMS ****************************************************************************** * * Tool stack creates front and back state nodes with initial state * XenbusStateInitialising. * Tool stack creates and sets up frontend display configuration * nodes per domain. 
* *-------------------------------- Normal flow -------------------------------- * * Front Back * ================================= ===================================== * XenbusStateInitialising XenbusStateInitialising * o Query backend device identification * data. * o Open and validate backend device. * | * | * V * XenbusStateInitWait * * o Query frontend configuration * o Allocate and initialize * event channels per configured * connector. * o Publish transport parameters * that will be in effect during * this connection. * | * | * V * XenbusStateInitialised * * o Query frontend transport parameters. * o Connect to the event channels. * | * | * V * XenbusStateConnected * * o Create and initialize OS * virtual display connectors * as per configuration. * | * | * V * XenbusStateConnected * * XenbusStateUnknown * XenbusStateClosed * XenbusStateClosing * o Remove virtual display device * o Remove event channels * | * | * V * XenbusStateClosed * *------------------------------- Recovery flow ------------------------------- * * In case of frontend unrecoverable errors backend handles that as * if frontend goes into the XenbusStateClosed state. * * In case of backend unrecoverable errors frontend tries removing * the virtualized device. If this is possible at the moment of error, * then frontend goes into the XenbusStateInitialising state and is ready for * new connection with backend. If the virtualized device is still in use and * cannot be removed, then frontend goes into the XenbusStateReconfiguring state * until either the virtualized device is removed or backend initiates a new * connection. On the virtualized device removal frontend goes into the * XenbusStateInitialising state. * * Note on XenbusStateReconfiguring state of the frontend: if backend has * unrecoverable errors then frontend cannot send requests to the backend * and thus cannot provide functionality of the virtualized device anymore. * After backend is back to normal the virtualized device may still hold some * state: configuration in use, allocated buffers, client application state etc. * In most cases, this will require frontend to implement complex recovery * reconnect logic. Instead, by going into XenbusStateReconfiguring state, * frontend will make sure no new clients of the virtualized device are * accepted, allow existing client(s) to exit gracefully by signaling error * state etc. * Once all the clients are gone frontend can reinitialize the virtualized * device and get into XenbusStateInitialising state again signaling the * backend that a new connection can be made. * * There are multiple conditions possible under which frontend will go from * XenbusStateReconfiguring into XenbusStateInitialising, some of them are OS * specific. For example: * 1. The underlying OS framework may provide callbacks to signal that the last * client of the virtualized device has gone and the device can be removed * 2. Frontend can schedule a deferred work (timer/tasklet/workqueue) * to periodically check if this is the right time to re-try removal of * the virtualized device. * 3. By any other means. 
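 *
 * Non-normative illustration: on Linux the state flow above is typically
 * driven from the frontend's XenBus "otherend_changed" callback;
 * xenbus_switch_state() is assumed as the state-publishing helper and
 * displback_changed is a hypothetical callback name:
 *
 *   static void displback_changed(struct xenbus_device *dev,
 *                                 enum xenbus_state backend_state)
 *   {
 *       switch (backend_state) {
 *       case XenbusStateInitWait:
 *           // allocate rings, publish transport parameters
 *           xenbus_switch_state(dev, XenbusStateInitialised);
 *           break;
 *       case XenbusStateConnected:
 *           // create OS virtual display connectors
 *           xenbus_switch_state(dev, XenbusStateConnected);
 *           break;
 *       case XenbusStateClosing:
 *       case XenbusStateClosed:
 *           // remove virtual display device and event channels
 *           xenbus_switch_state(dev, XenbusStateClosed);
 *           break;
 *       default:
 *           break;
 *       }
 *   }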
* ****************************************************************************** * REQUEST CODES ****************************************************************************** * Request codes [0; 15] are reserved and must not be used */ #define XENDISPL_OP_DBUF_CREATE 0x10 #define XENDISPL_OP_DBUF_DESTROY 0x11 #define XENDISPL_OP_FB_ATTACH 0x12 #define XENDISPL_OP_FB_DETACH 0x13 #define XENDISPL_OP_SET_CONFIG 0x14 #define XENDISPL_OP_PG_FLIP 0x15 /* ****************************************************************************** * EVENT CODES ****************************************************************************** */ #define XENDISPL_EVT_PG_FLIP 0x00 /* ****************************************************************************** * XENSTORE FIELD AND PATH NAME STRINGS, HELPERS ****************************************************************************** */ #define XENDISPL_DRIVER_NAME "vdispl" #define XENDISPL_LIST_SEPARATOR "," #define XENDISPL_RESOLUTION_SEPARATOR "x" #define XENDISPL_FIELD_BE_VERSIONS "versions" #define XENDISPL_FIELD_FE_VERSION "version" #define XENDISPL_FIELD_REQ_RING_REF "req-ring-ref" #define XENDISPL_FIELD_REQ_CHANNEL "req-event-channel" #define XENDISPL_FIELD_EVT_RING_REF "evt-ring-ref" #define XENDISPL_FIELD_EVT_CHANNEL "evt-event-channel" #define XENDISPL_FIELD_RESOLUTION "resolution" #define XENDISPL_FIELD_BE_ALLOC "be-alloc" #define XENDISPL_FIELD_UNIQUE_ID "unique-id" /* ****************************************************************************** * STATUS RETURN CODES ****************************************************************************** * * Status return code is zero on success and -XEN_EXX on failure. * ****************************************************************************** * Assumptions ****************************************************************************** * o usage of grant reference 0 as invalid grant reference: * grant reference 0 is valid, but never exposed to a PV driver, * because of the fact it is already in use/reserved by the PV console. * o all references in this document to page sizes must be treated * as pages of size XEN_PAGE_SIZE unless otherwise noted. * ****************************************************************************** * Description of the protocol between frontend and backend driver ****************************************************************************** * * The two halves of a Para-virtual display driver communicate with * each other using shared pages and event channels. * Shared page contains a ring with request/response packets. * * All reserved fields in the structures below must be 0. * Display buffers's cookie of value 0 is treated as invalid. * Framebuffer's cookie of value 0 is treated as invalid. 
* * For all request/response/event packets that use cookies: * dbuf_cookie - uint64_t, unique to guest domain value used by the backend * to map remote display buffer to its local one * fb_cookie - uint64_t, unique to guest domain value used by the backend * to map remote framebuffer to its local one * *---------------------------------- Requests --------------------------------- * * All requests/responses, which are not connector specific, must be sent over * control ring of the connector which has the index value of 0: * /local/domain/<dom-id>/device/vdispl/<dev-id>/0/req-ring-ref * * All request packets have the same length (64 octets) * All request packets have common header: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | operation | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * id - uint16_t, private guest value, echoed in response * operation - uint8_t, operation code, XENDISPL_OP_??? * * Request dbuf creation - request creation of a display buffer. * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id |_OP_DBUF_CREATE | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | dbuf_cookie low 32-bit | 12 * +----------------+----------------+----------------+----------------+ * | dbuf_cookie high 32-bit | 16 * +----------------+----------------+----------------+----------------+ * | width | 20 * +----------------+----------------+----------------+----------------+ * | height | 24 * +----------------+----------------+----------------+----------------+ * | bpp | 28 * +----------------+----------------+----------------+----------------+ * | buffer_sz | 32 * +----------------+----------------+----------------+----------------+ * | flags | 36 * +----------------+----------------+----------------+----------------+ * | gref_directory | 40 * +----------------+----------------+----------------+----------------+ * | reserved | 44 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * Must be sent over control ring of the connector which has the index * value of 0: * /local/domain/<dom-id>/device/vdispl/<dev-id>/0/req-ring-ref * All unused bits in flags field must be set to 0. * * An attempt to create multiple display buffers with the same dbuf_cookie is * an error. dbuf_cookie can be re-used after destroying the corresponding * display buffer. * * Width and height of the display buffers can be smaller, equal or bigger * than the connector's resolution. Depth/pixel format of the individual * buffers can differ as well. * * width - uint32_t, width in pixels * height - uint32_t, height in pixels * bpp - uint32_t, bits per pixel * buffer_sz - uint32_t, buffer size to be allocated, octets * flags - uint32_t, flags of the operation * o XENDISPL_DBUF_FLG_REQ_ALLOC - if set, then backend is requested * to allocate the buffer with the parameters provided in this request. 
* Page directory is handled as follows: * Frontend on request: * o allocates pages for the directory (gref_directory, * gref_dir_next_page(s) * o grants permissions for the pages of the directory to the backend * o sets gref_dir_next_page fields * Backend on response: * o grants permissions for the pages of the buffer allocated to * the frontend * o fills in page directory with grant references * (gref[] in struct xendispl_page_directory) * gref_directory - grant_ref_t, a reference to the first shared page * describing shared buffer references. At least one page exists. If shared * buffer size (buffer_sz) exceeds what can be addressed by this single page, * then reference to the next page must be supplied (see gref_dir_next_page * below) */ #define XENDISPL_DBUF_FLG_REQ_ALLOC (1 << 0) struct xendispl_dbuf_create_req { uint64_t dbuf_cookie; uint32_t width; uint32_t height; uint32_t bpp; uint32_t buffer_sz; uint32_t flags; grant_ref_t gref_directory; }; /* * Shared page for XENDISPL_OP_DBUF_CREATE buffer descriptor (gref_directory in * the request) employs a list of pages, describing all pages of the shared * data buffer: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | gref_dir_next_page | 4 * +----------------+----------------+----------------+----------------+ * | gref[0] | 8 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | gref[i] | i*4+8 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | gref[N - 1] | N*4+8 * +----------------+----------------+----------------+----------------+ * * gref_dir_next_page - grant_ref_t, reference to the next page describing * page directory. Must be 0 if there are no more pages in the list. 
* gref[i] - grant_ref_t, reference to a shared page of the buffer * allocated at XENDISPL_OP_DBUF_CREATE * * Number of grant_ref_t entries in the whole page directory is not * passed, but instead can be calculated as: * num_grefs_total = (XENDISPL_OP_DBUF_CREATE.buffer_sz + XEN_PAGE_SIZE - 1) / * XEN_PAGE_SIZE */ struct xendispl_page_directory { grant_ref_t gref_dir_next_page; grant_ref_t gref[1]; /* Variable length */ }; /* * Request dbuf destruction - destroy a previously allocated display buffer: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id |_OP_DBUF_DESTROY| reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | dbuf_cookie low 32-bit | 12 * +----------------+----------------+----------------+----------------+ * | dbuf_cookie high 32-bit | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * Must be sent over control ring of the connector which has the index * value of 0: * /local/domain/<dom-id>/device/vdispl/<dev-id>/0/req-ring-ref */ struct xendispl_dbuf_destroy_req { uint64_t dbuf_cookie; }; /* * Request framebuffer attachment - request attachment of a framebuffer to * previously created display buffer. * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | _OP_FB_ATTACH | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | dbuf_cookie low 32-bit | 12 * +----------------+----------------+----------------+----------------+ * | dbuf_cookie high 32-bit | 16 * +----------------+----------------+----------------+----------------+ * | fb_cookie low 32-bit | 20 * +----------------+----------------+----------------+----------------+ * | fb_cookie high 32-bit | 24 * +----------------+----------------+----------------+----------------+ * | width | 28 * +----------------+----------------+----------------+----------------+ * | height | 32 * +----------------+----------------+----------------+----------------+ * | pixel_format | 36 * +----------------+----------------+----------------+----------------+ * | reserved | 40 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * Must be sent over control ring of the connector which has the index * value of 0: * /local/domain/<dom-id>/device/vdispl/<dev-id>/0/req-ring-ref * Width and height can be smaller, equal or bigger than the connector's * resolution. * * An attempt to create multiple frame buffers with the same fb_cookie is * an error. fb_cookie can be re-used after destroying the corresponding * frame buffer. 
* * width - uint32_t, width in pixels * height - uint32_t, height in pixels * pixel_format - uint32_t, pixel format of the framebuffer, FOURCC code */ struct xendispl_fb_attach_req { uint64_t dbuf_cookie; uint64_t fb_cookie; uint32_t width; uint32_t height; uint32_t pixel_format; }; /* * Request framebuffer detach - detach a previously * attached framebuffer from the display buffer in request: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | _OP_FB_DETACH | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | fb_cookie low 32-bit | 12 * +----------------+----------------+----------------+----------------+ * | fb_cookie high 32-bit | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * Must be sent over control ring of the connector which has the index * value of 0: * /local/domain/<dom-id>/device/vdispl/<dev-id>/0/req-ring-ref */ struct xendispl_fb_detach_req { uint64_t fb_cookie; }; /* * Request configuration set/reset - request to set or reset * the configuration/mode of the display: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | _OP_SET_CONFIG | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | fb_cookie low 32-bit | 12 * +----------------+----------------+----------------+----------------+ * | fb_cookie high 32-bit | 16 * +----------------+----------------+----------------+----------------+ * | x | 20 * +----------------+----------------+----------------+----------------+ * | y | 24 * +----------------+----------------+----------------+----------------+ * | width | 28 * +----------------+----------------+----------------+----------------+ * | height | 32 * +----------------+----------------+----------------+----------------+ * | bpp | 40 * +----------------+----------------+----------------+----------------+ * | reserved | 44 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * Pass all zeros to reset, otherwise command is treated as * configuration set. * Framebuffer's cookie defines which framebuffer/dbuf must be * displayed while enabling display (applying configuration). * x, y, width and height are bound by the connector's resolution and must not * exceed it. 
* * x - uint32_t, starting position in pixels by X axis * y - uint32_t, starting position in pixels by Y axis * width - uint32_t, width in pixels * height - uint32_t, height in pixels * bpp - uint32_t, bits per pixel */ struct xendispl_set_config_req { uint64_t fb_cookie; uint32_t x; uint32_t y; uint32_t width; uint32_t height; uint32_t bpp; }; /* * Request page flip - request to flip a page identified by the framebuffer * cookie: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | _OP_PG_FLIP | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | fb_cookie low 32-bit | 12 * +----------------+----------------+----------------+----------------+ * | fb_cookie high 32-bit | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ */ struct xendispl_page_flip_req { uint64_t fb_cookie; }; /* *---------------------------------- Responses -------------------------------- * * All response packets have the same length (64 octets) * * All response packets have common header: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | reserved | 4 * +----------------+----------------+----------------+----------------+ * | status | 8 * +----------------+----------------+----------------+----------------+ * | reserved | 12 * +----------------+----------------+----------------+----------------+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ * * id - uint16_t, private guest value, echoed from request * status - int32_t, response status, zero on success and -XEN_EXX on failure * *----------------------------------- Events ---------------------------------- * * Events are sent via a shared page allocated by the front and propagated by * evt-event-channel/evt-ring-ref XenStore entries * All event packets have the same length (64 octets) * All event packets have common header: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | type | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * * id - uint16_t, event id, may be used by front * type - uint8_t, type of the event * * * Page flip complete event - event from back to front on page flip completed: * 0 1 2 3 octet * +----------------+----------------+----------------+----------------+ * | id | _EVT_PG_FLIP | reserved | 4 * +----------------+----------------+----------------+----------------+ * | reserved | 8 * +----------------+----------------+----------------+----------------+ * | fb_cookie low 32-bit | 12 * +----------------+----------------+----------------+----------------+ * | fb_cookie high 32-bit | 16 * +----------------+----------------+----------------+----------------+ * | reserved | 20 * +----------------+----------------+----------------+----------------+ * 
|/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/| * +----------------+----------------+----------------+----------------+ * | reserved | 64 * +----------------+----------------+----------------+----------------+ */ struct xendispl_pg_flip_evt { uint64_t fb_cookie; }; struct xendispl_req { uint16_t id; uint8_t operation; uint8_t reserved[5]; union { struct xendispl_dbuf_create_req dbuf_create; struct xendispl_dbuf_destroy_req dbuf_destroy; struct xendispl_fb_attach_req fb_attach; struct xendispl_fb_detach_req fb_detach; struct xendispl_set_config_req set_config; struct xendispl_page_flip_req pg_flip; uint8_t reserved[56]; } op; }; struct xendispl_resp { uint16_t id; uint8_t operation; uint8_t reserved; int32_t status; uint8_t reserved1[56]; }; struct xendispl_evt { uint16_t id; uint8_t type; uint8_t reserved[5]; union { struct xendispl_pg_flip_evt pg_flip; uint8_t reserved[56]; } op; }; DEFINE_RING_TYPES(xen_displif, struct xendispl_req, struct xendispl_resp); /* ****************************************************************************** * Back to front events delivery ****************************************************************************** * In order to deliver asynchronous events from back to front a shared page is * allocated by front and its granted reference propagated to back via * XenStore entries (evt-ring-ref/evt-event-channel). * This page has a common header used by both front and back to synchronize * access and control event's ring buffer, while back being a producer of the * events and front being a consumer. The rest of the page after the header * is used for event packets. * * Upon reception of an event(s) front may confirm its reception * for either each event, group of events or none. */ struct xendispl_event_page { uint32_t in_cons; uint32_t in_prod; uint8_t reserved[56]; }; #define XENDISPL_EVENT_PAGE_SIZE XEN_PAGE_SIZE #define XENDISPL_IN_RING_OFFS (sizeof(struct xendispl_event_page)) #define XENDISPL_IN_RING_SIZE (XENDISPL_EVENT_PAGE_SIZE - XENDISPL_IN_RING_OFFS) #define XENDISPL_IN_RING_LEN (XENDISPL_IN_RING_SIZE / sizeof(struct xendispl_evt)) #define XENDISPL_IN_RING(page) \ ((struct xendispl_evt *)((char *)(page) + XENDISPL_IN_RING_OFFS)) #define XENDISPL_IN_RING_REF(page, idx) \ (XENDISPL_IN_RING((page))[(idx) % XENDISPL_IN_RING_LEN]) #endif /* __XEN_PUBLIC_IO_DISPLIF_H__ */ interface/nmi.h 0000644 00000003012 14722073410 0007434 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /****************************************************************************** * nmi.h * * NMI callback registration and reason codes. * * Copyright (c) 2005, Keir Fraser <keir@xensource.com> */ #ifndef __XEN_PUBLIC_NMI_H__ #define __XEN_PUBLIC_NMI_H__ #include <xen/interface/xen.h> /* * NMI reason codes: * Currently these are x86-specific, stored in arch_shared_info.nmi_reason. */ /* I/O-check error reported via ISA port 0x61, bit 6. */ #define _XEN_NMIREASON_io_error 0 #define XEN_NMIREASON_io_error (1UL << _XEN_NMIREASON_io_error) /* PCI SERR reported via ISA port 0x61, bit 7. */ #define _XEN_NMIREASON_pci_serr 1 #define XEN_NMIREASON_pci_serr (1UL << _XEN_NMIREASON_pci_serr) /* Unknown hardware-generated NMI. */ #define _XEN_NMIREASON_unknown 2 #define XEN_NMIREASON_unknown (1UL << _XEN_NMIREASON_unknown) /* * long nmi_op(unsigned int cmd, void *arg) * NB. All ops return zero on success, else a negative error code. */ /* * Register NMI callback for this (calling) VCPU. Currently this only makes * sense for domain 0, vcpu 0. 
All other callers will be returned EINVAL. * arg == pointer to xennmi_callback structure. */ #define XENNMI_register_callback 0 struct xennmi_callback { unsigned long handler_address; unsigned long pad; }; DEFINE_GUEST_HANDLE_STRUCT(xennmi_callback); /* * Deregister NMI callback for this (calling) VCPU. * arg == NULL. */ #define XENNMI_unregister_callback 1 #endif /* __XEN_PUBLIC_NMI_H__ */ interface/memory.h 0000644 00000025257 14722073410 0010200 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /****************************************************************************** * memory.h * * Memory reservation and information. * * Copyright (c) 2005, Keir Fraser <keir@xensource.com> */ #ifndef __XEN_PUBLIC_MEMORY_H__ #define __XEN_PUBLIC_MEMORY_H__ #include <linux/spinlock.h> /* * Increase or decrease the specified domain's memory reservation. Returns a * -ve errcode on failure, or the # extents successfully allocated or freed. * arg == addr of struct xen_memory_reservation. */ #define XENMEM_increase_reservation 0 #define XENMEM_decrease_reservation 1 #define XENMEM_populate_physmap 6 struct xen_memory_reservation { /* * XENMEM_increase_reservation: * OUT: MFN (*not* GMFN) bases of extents that were allocated * XENMEM_decrease_reservation: * IN: GMFN bases of extents to free * XENMEM_populate_physmap: * IN: GPFN bases of extents to populate with memory * OUT: GMFN bases of extents that were allocated * (NB. This command also updates the mach_to_phys translation table) */ GUEST_HANDLE(xen_pfn_t) extent_start; /* Number of extents, and size/alignment of each (2^extent_order pages). */ xen_ulong_t nr_extents; unsigned int extent_order; /* * Maximum # bits addressable by the user of the allocated region (e.g., * I/O devices often have a 32-bit limitation even in 64-bit systems). If * zero then the user has no addressing restriction. * This field is not used by XENMEM_decrease_reservation. */ unsigned int address_bits; /* * Domain whose reservation is being changed. * Unprivileged domains can specify only DOMID_SELF. */ domid_t domid; }; DEFINE_GUEST_HANDLE_STRUCT(xen_memory_reservation); /* * An atomic exchange of memory pages. If return code is zero then * @out.extent_list provides GMFNs of the newly-allocated memory. * Returns zero on complete success, otherwise a negative error code. * On complete success then always @nr_exchanged == @in.nr_extents. * On partial success @nr_exchanged indicates how much work was done. */ #define XENMEM_exchange 11 struct xen_memory_exchange { /* * [IN] Details of memory extents to be exchanged (GMFN bases). * Note that @in.address_bits is ignored and unused. */ struct xen_memory_reservation in; /* * [IN/OUT] Details of new memory extents. * We require that: * 1. @in.domid == @out.domid * 2. @in.nr_extents << @in.extent_order == * @out.nr_extents << @out.extent_order * 3. @in.extent_start and @out.extent_start lists must not overlap * 4. @out.extent_start lists GPFN bases to be populated * 5. @out.extent_start is overwritten with allocated GMFN bases */ struct xen_memory_reservation out; /* * [OUT] Number of input extents that were successfully exchanged: * 1. The first @nr_exchanged input extents were successfully * deallocated. * 2. The corresponding first entries in the output extent list correctly * indicate the GMFNs that were successfully exchanged. * 3. All other input and output extents are untouched. * 4. If not all input exents are exchanged then the return code of this * command will be non-zero. * 5. 
THIS FIELD MUST BE INITIALISED TO ZERO BY THE CALLER! */ xen_ulong_t nr_exchanged; }; DEFINE_GUEST_HANDLE_STRUCT(xen_memory_exchange); /* * Returns the maximum machine frame number of mapped RAM in this system. * This command always succeeds (it never returns an error code). * arg == NULL. */ #define XENMEM_maximum_ram_page 2 /* * Returns the current or maximum memory reservation, in pages, of the * specified domain (may be DOMID_SELF). Returns -ve errcode on failure. * arg == addr of domid_t. */ #define XENMEM_current_reservation 3 #define XENMEM_maximum_reservation 4 /* * Returns a list of MFN bases of 2MB extents comprising the machine_to_phys * mapping table. Architectures which do not have a m2p table do not implement * this command. * arg == addr of xen_machphys_mfn_list_t. */ #define XENMEM_machphys_mfn_list 5 struct xen_machphys_mfn_list { /* * Size of the 'extent_start' array. Fewer entries will be filled if the * machphys table is smaller than max_extents * 2MB. */ unsigned int max_extents; /* * Pointer to buffer to fill with list of extent starts. If there are * any large discontiguities in the machine address space, 2MB gaps in * the machphys table will be represented by an MFN base of zero. */ GUEST_HANDLE(xen_pfn_t) extent_start; /* * Number of extents written to the above array. This will be smaller * than 'max_extents' if the machphys table is smaller than max_e * 2MB. */ unsigned int nr_extents; }; DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mfn_list); /* * Returns the location in virtual address space of the machine_to_phys * mapping table. Architectures which do not have a m2p table, or which do not * map it by default into guest address space, do not implement this command. * arg == addr of xen_machphys_mapping_t. */ #define XENMEM_machphys_mapping 12 struct xen_machphys_mapping { xen_ulong_t v_start, v_end; /* Start and end virtual addresses. */ xen_ulong_t max_mfn; /* Maximum MFN that can be looked up. */ }; DEFINE_GUEST_HANDLE_STRUCT(xen_machphys_mapping_t); #define XENMAPSPACE_shared_info 0 /* shared info page */ #define XENMAPSPACE_grant_table 1 /* grant table page */ #define XENMAPSPACE_gmfn 2 /* GMFN */ #define XENMAPSPACE_gmfn_range 3 /* GMFN range, XENMEM_add_to_physmap only. */ #define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another dom, * XENMEM_add_to_physmap_range only. */ #define XENMAPSPACE_dev_mmio 5 /* device mmio region */ /* * Sets the GPFN at which a particular page appears in the specified guest's * pseudophysical address space. * arg == addr of xen_add_to_physmap_t. */ #define XENMEM_add_to_physmap 7 struct xen_add_to_physmap { /* Which domain to change the mapping for. */ domid_t domid; /* Number of pages to go through for gmfn_range */ uint16_t size; /* Source mapping space. */ unsigned int space; /* Index into source mapping space. */ xen_ulong_t idx; /* GPFN where the source mapping page should appear. */ xen_pfn_t gpfn; }; DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap); /*** REMOVED ***/ /*#define XENMEM_translate_gpfn_list 8*/ #define XENMEM_add_to_physmap_range 23 struct xen_add_to_physmap_range { /* IN */ /* Which domain to change the mapping for. */ domid_t domid; uint16_t space; /* => enum phys_map_space */ /* Number of pages to go through */ uint16_t size; domid_t foreign_domid; /* IFF gmfn_foreign */ /* Indexes into space being mapped. */ GUEST_HANDLE(xen_ulong_t) idxs; /* GPFN in domid where the source mapping page should appear. */ GUEST_HANDLE(xen_pfn_t) gpfns; /* OUT */ /* Per index error code. 
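 * (presumably 0 on success, otherwise a negative error value for the
 * corresponding index).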
*/ GUEST_HANDLE(int) errs; }; DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap_range); /* * Returns the pseudo-physical memory map as it was when the domain * was started (specified by XENMEM_set_memory_map). * arg == addr of struct xen_memory_map. */ #define XENMEM_memory_map 9 struct xen_memory_map { /* * On call the number of entries which can be stored in buffer. On * return the number of entries which have been stored in * buffer. */ unsigned int nr_entries; /* * Entries in the buffer are in the same format as returned by the * BIOS INT 0x15 EAX=0xE820 call. */ GUEST_HANDLE(void) buffer; }; DEFINE_GUEST_HANDLE_STRUCT(xen_memory_map); /* * Returns the real physical memory map. Passes the same structure as * XENMEM_memory_map. * arg == addr of struct xen_memory_map. */ #define XENMEM_machine_memory_map 10 /* * Unmaps the page appearing at a particular GPFN from the specified guest's * pseudophysical address space. * arg == addr of xen_remove_from_physmap_t. */ #define XENMEM_remove_from_physmap 15 struct xen_remove_from_physmap { /* Which domain to change the mapping for. */ domid_t domid; /* GPFN of the current mapping of the page. */ xen_pfn_t gpfn; }; DEFINE_GUEST_HANDLE_STRUCT(xen_remove_from_physmap); /* * Get the pages for a particular guest resource, so that they can be * mapped directly by a tools domain. */ #define XENMEM_acquire_resource 28 struct xen_mem_acquire_resource { /* IN - The domain whose resource is to be mapped */ domid_t domid; /* IN - the type of resource */ uint16_t type; #define XENMEM_resource_ioreq_server 0 #define XENMEM_resource_grant_table 1 /* * IN - a type-specific resource identifier, which must be zero * unless stated otherwise. * * type == XENMEM_resource_ioreq_server -> id == ioreq server id * type == XENMEM_resource_grant_table -> id defined below */ uint32_t id; #define XENMEM_resource_grant_table_id_shared 0 #define XENMEM_resource_grant_table_id_status 1 /* IN/OUT - As an IN parameter number of frames of the resource * to be mapped. However, if the specified value is 0 and * frame_list is NULL then this field will be set to the * maximum value supported by the implementation on return. */ uint32_t nr_frames; /* * OUT - Must be zero on entry. On return this may contain a bitwise * OR of the following values. */ uint32_t flags; /* The resource pages have been assigned to the calling domain */ #define _XENMEM_rsrc_acq_caller_owned 0 #define XENMEM_rsrc_acq_caller_owned (1u << _XENMEM_rsrc_acq_caller_owned) /* * IN - the index of the initial frame to be mapped. This parameter * is ignored if nr_frames is 0. */ uint64_t frame; #define XENMEM_resource_ioreq_server_frame_bufioreq 0 #define XENMEM_resource_ioreq_server_frame_ioreq(n) (1 + (n)) /* * IN/OUT - If the tools domain is PV then, upon return, frame_list * will be populated with the MFNs of the resource. * If the tools domain is HVM then it is expected that, on * entry, frame_list will be populated with a list of GFNs * that will be mapped to the MFNs of the resource. * If -EIO is returned then the frame_list has only been * partially mapped and it is up to the caller to unmap all * the GFNs. * This parameter may be NULL if nr_frames is 0. 
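 *
 * As a rough usage sketch (an illustration only, not part of the
 * interface; it assumes a Linux tools-domain context where the
 * HYPERVISOR_memory_op() and set_xen_guest_handle() helpers are
 * available, and where 'dom', 'nr' and the xen_pfn_t array 'gfn_list'
 * are caller-provided), acquiring the shared grant table of a domain
 * might look like:
 *
 *   struct xen_mem_acquire_resource xmar = {
 *       .domid     = dom,
 *       .type      = XENMEM_resource_grant_table,
 *       .id        = XENMEM_resource_grant_table_id_shared,
 *       .frame     = 0,
 *       .nr_frames = nr,          /* .flags left zero, as required */
 *   };
 *   set_xen_guest_handle(xmar.frame_list, gfn_list);
 *   int rc = HYPERVISOR_memory_op(XENMEM_acquire_resource, &xmar);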
*/ GUEST_HANDLE(xen_pfn_t) frame_list; }; DEFINE_GUEST_HANDLE_STRUCT(xen_mem_acquire_resource); #endif /* __XEN_PUBLIC_MEMORY_H__ */ interface/features.h 0000644 00000005431 14722073410 0010476 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /****************************************************************************** * features.h * * Feature flags, reported by XENVER_get_features. * * Copyright (c) 2006, Keir Fraser <keir@xensource.com> */ #ifndef __XEN_PUBLIC_FEATURES_H__ #define __XEN_PUBLIC_FEATURES_H__ /* * If set, the guest does not need to write-protect its pagetables, and can * update them via direct writes. */ #define XENFEAT_writable_page_tables 0 /* * If set, the guest does not need to write-protect its segment descriptor * tables, and can update them via direct writes. */ #define XENFEAT_writable_descriptor_tables 1 /* * If set, translation between the guest's 'pseudo-physical' address space * and the host's machine address space are handled by the hypervisor. In this * mode the guest does not need to perform phys-to/from-machine translations * when performing page table operations. */ #define XENFEAT_auto_translated_physmap 2 /* If set, the guest is running in supervisor mode (e.g., x86 ring 0). */ #define XENFEAT_supervisor_mode_kernel 3 /* * If set, the guest does not need to allocate x86 PAE page directories * below 4GB. This flag is usually implied by auto_translated_physmap. */ #define XENFEAT_pae_pgdir_above_4gb 4 /* x86: Does this Xen host support the MMU_PT_UPDATE_PRESERVE_AD hypercall? */ #define XENFEAT_mmu_pt_update_preserve_ad 5 /* x86: Does this Xen host support the MMU_{CLEAR,COPY}_PAGE hypercall? */ #define XENFEAT_highmem_assist 6 /* * If set, GNTTABOP_map_grant_ref honors flags to be placed into guest kernel * available pte bits. */ #define XENFEAT_gnttab_map_avail_bits 7 /* x86: Does this Xen host support the HVM callback vector type? */ #define XENFEAT_hvm_callback_vector 8 /* x86: pvclock algorithm is safe to use on HVM */ #define XENFEAT_hvm_safe_pvclock 9 /* x86: pirq can be used by HVM guests */ #define XENFEAT_hvm_pirqs 10 /* operation as Dom0 is supported */ #define XENFEAT_dom0 11 /* Xen also maps grant references at pfn = mfn. * This feature flag is deprecated and should not be used. #define XENFEAT_grant_map_identity 12 */ /* Guest can use XENMEMF_vnode to specify virtual node for memory op. */ #define XENFEAT_memory_op_vnode_supported 13 /* arm: Hypervisor supports ARM SMC calling convention. */ #define XENFEAT_ARM_SMCCC_supported 14 /* * x86/PVH: If set, ACPI RSDP can be placed at any address. Otherwise RSDP * must be located in lower 1MB, as required by ACPI Specification for IA-PC * systems. * This feature flag is only consulted if XEN_ELFNOTE_GUEST_OS contains * the "linux" string. */ #define XENFEAT_linux_rsdp_unrestricted 15 #define XENFEAT_NR_SUBMAPS 1 #endif /* __XEN_PUBLIC_FEATURES_H__ */ interface/xen.h 0000644 00000075633 14722073410 0007465 0 ustar 00 /****************************************************************************** * xen.h * * Guest OS interface to Xen. 
* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (c) 2004, K A Fraser */ #ifndef __XEN_PUBLIC_XEN_H__ #define __XEN_PUBLIC_XEN_H__ #include <asm/xen/interface.h> /* * XEN "SYSTEM CALLS" (a.k.a. HYPERCALLS). */ /* * x86_32: EAX = vector; EBX, ECX, EDX, ESI, EDI = args 1, 2, 3, 4, 5. * EAX = return value * (argument registers may be clobbered on return) * x86_64: RAX = vector; RDI, RSI, RDX, R10, R8, R9 = args 1, 2, 3, 4, 5, 6. * RAX = return value * (argument registers not clobbered on return; RCX, R11 are) */ #define __HYPERVISOR_set_trap_table 0 #define __HYPERVISOR_mmu_update 1 #define __HYPERVISOR_set_gdt 2 #define __HYPERVISOR_stack_switch 3 #define __HYPERVISOR_set_callbacks 4 #define __HYPERVISOR_fpu_taskswitch 5 #define __HYPERVISOR_sched_op_compat 6 #define __HYPERVISOR_platform_op 7 #define __HYPERVISOR_set_debugreg 8 #define __HYPERVISOR_get_debugreg 9 #define __HYPERVISOR_update_descriptor 10 #define __HYPERVISOR_memory_op 12 #define __HYPERVISOR_multicall 13 #define __HYPERVISOR_update_va_mapping 14 #define __HYPERVISOR_set_timer_op 15 #define __HYPERVISOR_event_channel_op_compat 16 #define __HYPERVISOR_xen_version 17 #define __HYPERVISOR_console_io 18 #define __HYPERVISOR_physdev_op_compat 19 #define __HYPERVISOR_grant_table_op 20 #define __HYPERVISOR_vm_assist 21 #define __HYPERVISOR_update_va_mapping_otherdomain 22 #define __HYPERVISOR_iret 23 /* x86 only */ #define __HYPERVISOR_vcpu_op 24 #define __HYPERVISOR_set_segment_base 25 /* x86/64 only */ #define __HYPERVISOR_mmuext_op 26 #define __HYPERVISOR_xsm_op 27 #define __HYPERVISOR_nmi_op 28 #define __HYPERVISOR_sched_op 29 #define __HYPERVISOR_callback_op 30 #define __HYPERVISOR_xenoprof_op 31 #define __HYPERVISOR_event_channel_op 32 #define __HYPERVISOR_physdev_op 33 #define __HYPERVISOR_hvm_op 34 #define __HYPERVISOR_sysctl 35 #define __HYPERVISOR_domctl 36 #define __HYPERVISOR_kexec_op 37 #define __HYPERVISOR_tmem_op 38 #define __HYPERVISOR_xc_reserved_op 39 /* reserved for XenClient */ #define __HYPERVISOR_xenpmu_op 40 #define __HYPERVISOR_dm_op 41 /* Architecture-specific hypercall definitions. */ #define __HYPERVISOR_arch_0 48 #define __HYPERVISOR_arch_1 49 #define __HYPERVISOR_arch_2 50 #define __HYPERVISOR_arch_3 51 #define __HYPERVISOR_arch_4 52 #define __HYPERVISOR_arch_5 53 #define __HYPERVISOR_arch_6 54 #define __HYPERVISOR_arch_7 55 /* * VIRTUAL INTERRUPTS * * Virtual interrupts that a guest OS may receive from Xen. * In the side comments, 'V.' denotes a per-VCPU VIRQ while 'G.' denotes a * global VIRQ. 
The former can be bound once per VCPU and cannot be re-bound. * The latter can be allocated only once per guest: they must initially be * allocated to VCPU0 but can subsequently be re-bound. */ #define VIRQ_TIMER 0 /* V. Timebase update, and/or requested timeout. */ #define VIRQ_DEBUG 1 /* V. Request guest to dump debug info. */ #define VIRQ_CONSOLE 2 /* G. (DOM0) Bytes received on emergency console. */ #define VIRQ_DOM_EXC 3 /* G. (DOM0) Exceptional event for some domain. */ #define VIRQ_TBUF 4 /* G. (DOM0) Trace buffer has records available. */ #define VIRQ_DEBUGGER 6 /* G. (DOM0) A domain has paused for debugging. */ #define VIRQ_XENOPROF 7 /* V. XenOprofile interrupt: new sample available */ #define VIRQ_CON_RING 8 /* G. (DOM0) Bytes received on console */ #define VIRQ_PCPU_STATE 9 /* G. (DOM0) PCPU state changed */ #define VIRQ_MEM_EVENT 10 /* G. (DOM0) A memory event has occurred */ #define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient */ #define VIRQ_ENOMEM 12 /* G. (DOM0) Low on heap memory */ #define VIRQ_XENPMU 13 /* PMC interrupt */ /* Architecture-specific VIRQ definitions. */ #define VIRQ_ARCH_0 16 #define VIRQ_ARCH_1 17 #define VIRQ_ARCH_2 18 #define VIRQ_ARCH_3 19 #define VIRQ_ARCH_4 20 #define VIRQ_ARCH_5 21 #define VIRQ_ARCH_6 22 #define VIRQ_ARCH_7 23 #define NR_VIRQS 24 /* * enum neg_errnoval HYPERVISOR_mmu_update(const struct mmu_update reqs[], * unsigned count, unsigned *done_out, * unsigned foreigndom) * @reqs is an array of mmu_update_t structures ((ptr, val) pairs). * @count is the length of the above array. * @pdone is an output parameter indicating number of completed operations * @foreigndom[15:0]: FD, the expected owner of data pages referenced in this * hypercall invocation. Can be DOMID_SELF. * @foreigndom[31:16]: PFD, the expected owner of pagetable pages referenced * in this hypercall invocation. The value of this field * (x) encodes the PFD as follows: * x == 0 => PFD == DOMID_SELF * x != 0 => PFD == x - 1 * * Sub-commands: ptr[1:0] specifies the appropriate MMU_* command. * ------------- * ptr[1:0] == MMU_NORMAL_PT_UPDATE: * Updates an entry in a page table belonging to PFD. If updating an L1 table, * and the new table entry is valid/present, the mapped frame must belong to * FD. If attempting to map an I/O page then the caller assumes the privilege * of the FD. * FD == DOMID_IO: Permit /only/ I/O mappings, at the priv level of the caller. * FD == DOMID_XEN: Map restricted areas of Xen's heap space. * ptr[:2] -- Machine address of the page-table entry to modify. * val -- Value to write. * * There are also certain implicit requirements when using this hypercall. The * pages that make up a pagetable must be mapped read-only in the guest. * This prevents uncontrolled guest updates to the pagetable. Xen strictly * enforces this, and will disallow any pagetable update which will end up * mapping a pagetable page RW, and will disallow using any writable page as a * pagetable. In practice it means that when constructing a page table for a * process, thread, etc., we MUST be very diligent in following these rules: * 1). Start with top-level page (PGD or in Xen language: L4). Fill out * the entries. * 2). Keep on going, filling out the upper (PUD or L3), and middle (PMD * or L2). * 3). Start filling out the PTE table (L1) with the PTE entries. Once * done, make sure to set each of those entries to RO (so writeable bit * is unset). Once that has been completed, set the PMD (L2) for this * PTE table as RO. * 4).
When completed with all of the PMD (L2) entries, and all of them have * been set to RO, make sure to set the PUD (L3) RO as well. Do the same * operation on PGD (L4) pagetable entries that have a PUD (L3) entry. * 5). Now before you can use those pages (so setting the cr3), you MUST also * pin them so that the hypervisor can verify the entries. This is done * via the HYPERVISOR_mmuext_op(MMUEXT_PIN_L4_TABLE, guest physical frame * number of the PGD (L4)). At that point the HYPERVISOR_mmuext_op( * MMUEXT_NEW_BASEPTR, guest physical frame number of the PGD (L4)) can be * issued. * For 32-bit guests, the L4 is not used (as there are fewer pagetable * levels), so the L3 is used instead. * At this point the pagetables can be modified using the MMU_NORMAL_PT_UPDATE * hypercall. If so desired, the OS can also try to write to the PTE * and be trapped by the hypervisor (as the PTE entry is RO). * * To deallocate the pages, the operations are the reverse of the steps * mentioned above. The argument is MMUEXT_UNPIN_TABLE for all levels and the * pagetable MUST not be in use (meaning that the cr3 is not set to it). * * ptr[1:0] == MMU_MACHPHYS_UPDATE: * Updates an entry in the machine->pseudo-physical mapping table. * ptr[:2] -- Machine address within the frame whose mapping to modify. * The frame must belong to the FD, if one is specified. * val -- Value to write into the mapping entry. * * ptr[1:0] == MMU_PT_UPDATE_PRESERVE_AD: * As MMU_NORMAL_PT_UPDATE above, but A/D bits currently in the PTE are ORed * with those in @val. * * @val is usually the machine frame number along with some attributes. * The attributes by default follow the architecture defined bits. Meaning that * if this is an X86_64 machine and the four-level page table layout is used, * the layout of val is: * - 63 if set means No execute (NX) * - 46-13 the machine frame number * - 12 available for guest * - 11 available for guest * - 10 available for guest * - 9 available for guest * - 8 global * - 7 PAT (PSE is disabled, must use hypercall to make 4MB or 2MB pages) * - 6 dirty * - 5 accessed * - 4 page cached disabled * - 3 page write through * - 2 userspace accessible * - 1 writeable * - 0 present * * The one bit that does not fit with the default layout is PAGE_PSE * (also called PAGE_PAT). The MMUEXT_[UN]MARK_SUPER arguments to the * HYPERVISOR_mmuext_op serve as the mechanism to set a pagetable to be 4MB * (or 2MB) instead of using the PAGE_PSE bit. * * The reason that the PAGE_PSE (bit 7) is not being utilized is due to Xen * using it as the Page Attribute Table (PAT) bit - for details on it please * refer to Intel SDM 10.12. The PAT allows setting the caching attributes of * pages instead of using MTRRs. * * The PAT MSR is as follows (it is a 64-bit value, each entry is 8 bits): * PAT4 PAT0 * +-----+-----+----+----+----+-----+----+----+ * | UC | UC- | WC | WB | UC | UC- | WC | WB | <= Linux * +-----+-----+----+----+----+-----+----+----+ * | UC | UC- | WT | WB | UC | UC- | WT | WB | <= BIOS (default when machine boots) * +-----+-----+----+----+----+-----+----+----+ * | rsv | rsv | WP | WC | UC | UC- | WT | WB | <= Xen * +-----+-----+----+----+----+-----+----+----+ * * The lookup of this index table translates to looking up * Bit 7, Bit 4, and Bit 3 of the val entry: * * PAT/PSE (bit 7), PCD (bit 4), and PWT (bit 3). * * If all bits are off, then we are using PAT0. If only bit 3 is turned on, * then we are using PAT1; if bits 3 and 4 are both on, then PAT3; and so on. * * As you can see, the Linux PAT1 translates to PAT4 under Xen.
This means * that a guest which follows Linux's PAT setup and would like to set Write * Combined on pages MUST use the PAT4 entry. That is, Bit 7 (PAGE_PAT) must * be set. For example, Linux only uses PAT0, PAT1, and PAT2 for * caching, as: * * WB = none (so PAT0) * WC = PWT (bit 3 on) * UC = PWT | PCD (bits 3 and 4 are on). * * To make it work with Xen, it needs to translate the WC bit like so: * * PWT (so bit 3 on) --> PAT (so bit 7 is on) and clear bit 3 * * And to translate back: * * PAT (bit 7 on) --> PWT (bit 3 on) and clear bit 7. */ #define MMU_NORMAL_PT_UPDATE 0 /* checked '*ptr = val'. ptr is MA. */ #define MMU_MACHPHYS_UPDATE 1 /* ptr = MA of frame to modify entry for */ #define MMU_PT_UPDATE_PRESERVE_AD 2 /* atomically: *ptr = val | (*ptr&(A|D)) */ #define MMU_PT_UPDATE_NO_TRANSLATE 3 /* checked '*ptr = val'. ptr is MA. */ /* * MMU EXTENDED OPERATIONS * * enum neg_errnoval HYPERVISOR_mmuext_op(mmuext_op_t uops[], * unsigned int count, * unsigned int *pdone, * unsigned int foreigndom) */ /* HYPERVISOR_mmuext_op() accepts a list of mmuext_op structures. * A foreigndom (FD) can be specified (or DOMID_SELF for none). * Where the FD has some effect, it is described below. * * cmd: MMUEXT_(UN)PIN_*_TABLE * mfn: Machine frame number to be (un)pinned as a p.t. page. * The frame must belong to the FD, if one is specified. * * cmd: MMUEXT_NEW_BASEPTR * mfn: Machine frame number of new page-table base to install in MMU. * * cmd: MMUEXT_NEW_USER_BASEPTR [x86/64 only] * mfn: Machine frame number of new page-table base to install in MMU * when in user space. * * cmd: MMUEXT_TLB_FLUSH_LOCAL * No additional arguments. Flushes local TLB. * * cmd: MMUEXT_INVLPG_LOCAL * linear_addr: Linear address to be flushed from the local TLB. * * cmd: MMUEXT_TLB_FLUSH_MULTI * vcpumask: Pointer to bitmap of VCPUs to be flushed. * * cmd: MMUEXT_INVLPG_MULTI * linear_addr: Linear address to be flushed. * vcpumask: Pointer to bitmap of VCPUs to be flushed. * * cmd: MMUEXT_TLB_FLUSH_ALL * No additional arguments. Flushes all VCPUs' TLBs. * * cmd: MMUEXT_INVLPG_ALL * linear_addr: Linear address to be flushed from all VCPUs' TLBs. * * cmd: MMUEXT_FLUSH_CACHE * No additional arguments. Writes back and flushes cache contents. * * cmd: MMUEXT_FLUSH_CACHE_GLOBAL * No additional arguments. Writes back and flushes cache contents * on all CPUs in the system. * * cmd: MMUEXT_SET_LDT * linear_addr: Linear address of LDT base (NB. must be page-aligned). * nr_ents: Number of entries in LDT. * * cmd: MMUEXT_CLEAR_PAGE * mfn: Machine frame number to be cleared. * * cmd: MMUEXT_COPY_PAGE * mfn: Machine frame number of the destination page. * src_mfn: Machine frame number of the source page. * * cmd: MMUEXT_[UN]MARK_SUPER * mfn: Machine frame number of head of superpage to be [un]marked.
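 *
 * A minimal usage sketch (an illustration only, assuming a Linux PV
 * kernel context where a HYPERVISOR_mmuext_op() wrapper matching the
 * prototype above is available), flushing the local TLB:
 *
 *   struct mmuext_op op = { .cmd = MMUEXT_TLB_FLUSH_LOCAL };
 *   unsigned int done = 0;
 *   int rc = HYPERVISOR_mmuext_op(&op, 1, &done, DOMID_SELF);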
*/ #define MMUEXT_PIN_L1_TABLE 0 #define MMUEXT_PIN_L2_TABLE 1 #define MMUEXT_PIN_L3_TABLE 2 #define MMUEXT_PIN_L4_TABLE 3 #define MMUEXT_UNPIN_TABLE 4 #define MMUEXT_NEW_BASEPTR 5 #define MMUEXT_TLB_FLUSH_LOCAL 6 #define MMUEXT_INVLPG_LOCAL 7 #define MMUEXT_TLB_FLUSH_MULTI 8 #define MMUEXT_INVLPG_MULTI 9 #define MMUEXT_TLB_FLUSH_ALL 10 #define MMUEXT_INVLPG_ALL 11 #define MMUEXT_FLUSH_CACHE 12 #define MMUEXT_SET_LDT 13 #define MMUEXT_NEW_USER_BASEPTR 15 #define MMUEXT_CLEAR_PAGE 16 #define MMUEXT_COPY_PAGE 17 #define MMUEXT_FLUSH_CACHE_GLOBAL 18 #define MMUEXT_MARK_SUPER 19 #define MMUEXT_UNMARK_SUPER 20 #ifndef __ASSEMBLY__ struct mmuext_op { unsigned int cmd; union { /* [UN]PIN_TABLE, NEW_BASEPTR, NEW_USER_BASEPTR * CLEAR_PAGE, COPY_PAGE, [UN]MARK_SUPER */ xen_pfn_t mfn; /* INVLPG_LOCAL, INVLPG_ALL, SET_LDT */ unsigned long linear_addr; } arg1; union { /* SET_LDT */ unsigned int nr_ents; /* TLB_FLUSH_MULTI, INVLPG_MULTI */ void *vcpumask; /* COPY_PAGE */ xen_pfn_t src_mfn; } arg2; }; DEFINE_GUEST_HANDLE_STRUCT(mmuext_op); #endif /* These are passed as 'flags' to update_va_mapping. They can be ORed. */ /* When specifying UVMF_MULTI, also OR in a pointer to a CPU bitmap. */ /* UVMF_LOCAL is merely UVMF_MULTI with a NULL bitmap pointer. */ #define UVMF_NONE (0UL<<0) /* No flushing at all. */ #define UVMF_TLB_FLUSH (1UL<<0) /* Flush entire TLB(s). */ #define UVMF_INVLPG (2UL<<0) /* Flush only one entry. */ #define UVMF_FLUSHTYPE_MASK (3UL<<0) #define UVMF_MULTI (0UL<<2) /* Flush subset of TLBs. */ #define UVMF_LOCAL (0UL<<2) /* Flush local TLB. */ #define UVMF_ALL (1UL<<2) /* Flush all TLBs. */ /* * Commands to HYPERVISOR_console_io(). */ #define CONSOLEIO_write 0 #define CONSOLEIO_read 1 /* * Commands to HYPERVISOR_vm_assist(). */ #define VMASST_CMD_enable 0 #define VMASST_CMD_disable 1 /* x86/32 guests: simulate full 4GB segment limits. */ #define VMASST_TYPE_4gb_segments 0 /* x86/32 guests: trap (vector 15) whenever above vmassist is used. */ #define VMASST_TYPE_4gb_segments_notify 1 /* * x86 guests: support writes to bottom-level PTEs. * NB1. Page-directory entries cannot be written. * NB2. Guest must continue to remove all writable mappings of PTEs. */ #define VMASST_TYPE_writable_pagetables 2 /* x86/PAE guests: support PDPTs above 4GB. */ #define VMASST_TYPE_pae_extended_cr3 3 /* * x86 guests: Sane behaviour for virtual iopl * - virtual iopl updated from do_iret() hypercalls. * - virtual iopl reported in bounce frames. * - guest kernels assumed to be level 0 for the purpose of iopl checks. */ #define VMASST_TYPE_architectural_iopl 4 /* * All guests: activate update indicator in vcpu_runstate_info * Enable setting the XEN_RUNSTATE_UPDATE flag in guest memory mapped * vcpu_runstate_info during updates of the runstate information. */ #define VMASST_TYPE_runstate_update_flag 5 #define MAX_VMASST_TYPE 5 #ifndef __ASSEMBLY__ typedef uint16_t domid_t; /* Domain ids >= DOMID_FIRST_RESERVED cannot be used for ordinary domains. */ #define DOMID_FIRST_RESERVED (0x7FF0U) /* DOMID_SELF is used in certain contexts to refer to oneself. */ #define DOMID_SELF (0x7FF0U) /* * DOMID_IO is used to restrict page-table updates to mapping I/O memory. * Although no Foreign Domain need be specified to map I/O pages, DOMID_IO * is useful to ensure that no mappings to the OS's own heap are accidentally * installed. (e.g., in Linux this could cause havoc as reference counts * aren't adjusted on the I/O-mapping code path). 
* This only makes sense in MMUEXT_SET_FOREIGNDOM, but in that context can * be specified by any calling domain. */ #define DOMID_IO (0x7FF1U) /* * DOMID_XEN is used to allow privileged domains to map restricted parts of * Xen's heap space (e.g., the machine_to_phys table). * This only makes sense in MMUEXT_SET_FOREIGNDOM, and is only permitted if * the caller is privileged. */ #define DOMID_XEN (0x7FF2U) /* DOMID_COW is used as the owner of sharable pages */ #define DOMID_COW (0x7FF3U) /* DOMID_INVALID is used to identify pages with unknown owner. */ #define DOMID_INVALID (0x7FF4U) /* Idle domain. */ #define DOMID_IDLE (0x7FFFU) /* * Send an array of these to HYPERVISOR_mmu_update(). * NB. The fields are natural pointer/address size for this architecture. */ struct mmu_update { uint64_t ptr; /* Machine address of PTE. */ uint64_t val; /* New contents of PTE. */ }; DEFINE_GUEST_HANDLE_STRUCT(mmu_update); /* * Send an array of these to HYPERVISOR_multicall(). * NB. The fields are logically the natural register size for this * architecture. In cases where xen_ulong_t is larger than this then * any unused bits in the upper portion must be zero. */ struct multicall_entry { xen_ulong_t op; xen_long_t result; xen_ulong_t args[6]; }; DEFINE_GUEST_HANDLE_STRUCT(multicall_entry); struct vcpu_time_info { /* * Updates to the following values are preceded and followed * by an increment of 'version'. The guest can therefore * detect updates by looking for changes to 'version'. If the * least-significant bit of the version number is set then an * update is in progress and the guest must wait to read a * consistent set of values. The correct way to interact with * the version number is similar to Linux's seqlock: see the * implementations of read_seqbegin/read_seqretry. */ uint32_t version; uint32_t pad0; uint64_t tsc_timestamp; /* TSC at last update of time vals. */ uint64_t system_time; /* Time, in nanosecs, since boot. */ /* * Current system time: * system_time + ((tsc - tsc_timestamp) << tsc_shift) * tsc_to_system_mul * CPU frequency (Hz): * ((10^9 << 32) / tsc_to_system_mul) >> tsc_shift */ uint32_t tsc_to_system_mul; int8_t tsc_shift; int8_t pad1[3]; }; /* 32 bytes */ struct vcpu_info { /* * 'evtchn_upcall_pending' is written non-zero by Xen to indicate * a pending notification for a particular VCPU. It is then cleared * by the guest OS /before/ checking for pending work, thus avoiding * a set-and-check race. Note that the mask is only accessed by Xen * on the CPU that is currently hosting the VCPU. This means that the * pending and mask flags can be updated by the guest without special * synchronisation (i.e., no need for the x86 LOCK prefix). * This may seem suboptimal because if the pending flag is set by * a different CPU then an IPI may be scheduled even when the mask * is set. However, note: * 1. The task of 'interrupt holdoff' is covered by the per-event- * channel mask bits. A 'noisy' event that is continually being * triggered can be masked at source at this very precise * granularity. * 2. The main purpose of the per-VCPU mask is therefore to restrict * reentrant execution: whether for concurrency control, or to * prevent unbounded stack usage. Whatever the purpose, we expect * that the mask will be asserted only for short periods at a time, * and so the likelihood of a 'spurious' IPI is suitably small. * The mask is read before making an event upcall to the guest: a * non-zero mask therefore guarantees that the VCPU will not receive * an upcall activation. 
The mask is cleared when the VCPU requests * to block: this avoids wakeup-waiting races. */ uint8_t evtchn_upcall_pending; uint8_t evtchn_upcall_mask; xen_ulong_t evtchn_pending_sel; struct arch_vcpu_info arch; struct pvclock_vcpu_time_info time; }; /* 64 bytes (x86) */ /* * Xen/kernel shared data -- pointer provided in start_info. * NB. We expect that this struct is smaller than a page. */ struct shared_info { struct vcpu_info vcpu_info[MAX_VIRT_CPUS]; /* * A domain can create "event channels" on which it can send and receive * asynchronous event notifications. There are three classes of event that * are delivered by this mechanism: * 1. Bi-directional inter- and intra-domain connections. Domains must * arrange out-of-band to set up a connection (usually by allocating * an unbound 'listener' port and advertising that via a storage service * such as xenstore). * 2. Physical interrupts. A domain with suitable hardware-access * privileges can bind an event-channel port to a physical interrupt * source. * 3. Virtual interrupts ('events'). A domain can bind an event-channel * port to a virtual interrupt source, such as the virtual-timer * device or the emergency console. * * Event channels are addressed by a "port index". Each channel is * associated with two bits of information: * 1. PENDING -- notifies the domain that there is a pending notification * to be processed. This bit is cleared by the guest. * 2. MASK -- if this bit is clear then a 0->1 transition of PENDING * will cause an asynchronous upcall to be scheduled. This bit is only * updated by the guest. It is read-only within Xen. If a channel * becomes pending while the channel is masked then the 'edge' is lost * (i.e., when the channel is unmasked, the guest must manually handle * pending notifications as no upcall will be scheduled by Xen). * * To expedite scanning of pending notifications, any 0->1 pending * transition on an unmasked channel causes a corresponding bit in a * per-vcpu selector word to be set. Each bit in the selector covers a * 'C long' in the PENDING bitfield array. */ xen_ulong_t evtchn_pending[sizeof(xen_ulong_t) * 8]; xen_ulong_t evtchn_mask[sizeof(xen_ulong_t) * 8]; /* * Wallclock time: updated only by control software. Guests should base * their gettimeofday() syscall on this wallclock-base value. */ struct pvclock_wall_clock wc; struct arch_shared_info arch; }; /* * Start-of-day memory layout * * 1. The domain is started within a contiguous virtual-memory region. * 2. The contiguous region begins and ends on an aligned 4MB boundary. * 3. This is the order of bootstrap elements in the initial virtual region: * a. relocated kernel image * b. initial ram disk [mod_start, mod_len] * (may be omitted) * c. list of allocated page frames [mfn_list, nr_pages] * (unless relocated due to XEN_ELFNOTE_INIT_P2M) * d. start_info_t structure [register ESI (x86)] * in case of dom0 this page contains the console info, too * e. unless dom0: xenstore ring page * f. unless dom0: console ring page * g. bootstrap page tables [pt_base, CR3 (x86)] * h. bootstrap stack [register ESP (x86)] * 4. Bootstrap elements are packed together, but each is 4kB-aligned. * 5. The list of page frames forms a contiguous 'pseudo-physical' memory * layout for the domain. In particular, the bootstrap virtual-memory * region is a 1:1 mapping to the first section of the pseudo-physical map. * 6. All bootstrap elements are mapped read-writable for the guest OS. The * only exception is the bootstrap page table, which is mapped read-only. * 7.
There is guaranteed to be at least 512kB padding after the final * bootstrap element. If necessary, the bootstrap virtual region is * extended by an extra 4MB to ensure this. */ #define MAX_GUEST_CMDLINE 1024 struct start_info { /* THE FOLLOWING ARE FILLED IN BOTH ON INITIAL BOOT AND ON RESUME. */ char magic[32]; /* "xen-<version>-<platform>". */ unsigned long nr_pages; /* Total pages allocated to this domain. */ unsigned long shared_info; /* MACHINE address of shared info struct. */ uint32_t flags; /* SIF_xxx flags. */ xen_pfn_t store_mfn; /* MACHINE page number of shared page. */ uint32_t store_evtchn; /* Event channel for store communication. */ union { struct { xen_pfn_t mfn; /* MACHINE page number of console page. */ uint32_t evtchn; /* Event channel for console page. */ } domU; struct { uint32_t info_off; /* Offset of console_info struct. */ uint32_t info_size; /* Size of console_info struct from start.*/ } dom0; } console; /* THE FOLLOWING ARE ONLY FILLED IN ON INITIAL BOOT (NOT RESUME). */ unsigned long pt_base; /* VIRTUAL address of page directory. */ unsigned long nr_pt_frames; /* Number of bootstrap p.t. frames. */ unsigned long mfn_list; /* VIRTUAL address of page-frame list. */ unsigned long mod_start; /* VIRTUAL address of pre-loaded module. */ unsigned long mod_len; /* Size (bytes) of pre-loaded module. */ int8_t cmd_line[MAX_GUEST_CMDLINE]; /* The pfn range here covers both page table and p->m table frames. */ unsigned long first_p2m_pfn;/* 1st pfn forming initial P->M table. */ unsigned long nr_p2m_frames;/* # of pfns forming initial P->M table. */ }; /* These flags are passed in the 'flags' field of start_info_t. */ #define SIF_PRIVILEGED (1<<0) /* Is the domain privileged? */ #define SIF_INITDOMAIN (1<<1) /* Is this the initial control domain? */ #define SIF_MULTIBOOT_MOD (1<<2) /* Is mod_start a multiboot module? */ #define SIF_MOD_START_PFN (1<<3) /* Is mod_start a PFN? */ #define SIF_VIRT_P2M_4TOOLS (1<<4) /* Do Xen tools understand a virt. mapped */ /* P->M making the 3 level tree obsolete? */ #define SIF_PM_MASK (0xFF<<8) /* reserve 1 byte for xen-pm options */ /* * A multiboot module is a package containing modules very similar to a * multiboot module array. The only differences are: * - the array of module descriptors is by convention simply at the beginning * of the multiboot module, * - addresses in the module descriptors are based on the beginning of the * multiboot module, * - the number of modules is determined by a termination descriptor that has * mod_start == 0. * * This permits both building it statically and referencing it in a * configuration file, and lets the PV guest easily rebase the addresses to * virtual addresses while counting the number of modules. */ struct xen_multiboot_mod_list { /* Address of first byte of the module */ uint32_t mod_start; /* Address of last byte of the module (inclusive) */ uint32_t mod_end; /* Address of zero-terminated command line */ uint32_t cmdline; /* Unused, must be zero */ uint32_t pad; }; /* * The console structure in start_info.console.dom0 * * This structure includes a variety of information required to * have a working VGA/VESA console. */ struct dom0_vga_console_info { uint8_t video_type; #define XEN_VGATYPE_TEXT_MODE_3 0x03 #define XEN_VGATYPE_VESA_LFB 0x23 #define XEN_VGATYPE_EFI_LFB 0x70 union { struct { /* Font height, in pixels. */ uint16_t font_height; /* Cursor location (column, row). */ uint16_t cursor_x, cursor_y; /* Number of rows and columns (dimensions in characters).
*/ uint16_t rows, columns; } text_mode_3; struct { /* Width and height, in pixels. */ uint16_t width, height; /* Bytes per scan line. */ uint16_t bytes_per_line; /* Bits per pixel. */ uint16_t bits_per_pixel; /* LFB physical address, and size (in units of 64kB). */ uint32_t lfb_base; uint32_t lfb_size; /* RGB mask offsets and sizes, as defined by VBE 1.2+ */ uint8_t red_pos, red_size; uint8_t green_pos, green_size; uint8_t blue_pos, blue_size; uint8_t rsvd_pos, rsvd_size; /* VESA capabilities (offset 0xa, VESA command 0x4f00). */ uint32_t gbl_caps; /* Mode attributes (offset 0x0, VESA command 0x4f01). */ uint16_t mode_attrs; } vesa_lfb; } u; }; typedef uint64_t cpumap_t; typedef uint8_t xen_domain_handle_t[16]; /* Turn a plain number into a C unsigned long constant. */ #define __mk_unsigned_long(x) x ## UL #define mk_unsigned_long(x) __mk_unsigned_long(x) #define TMEM_SPEC_VERSION 1 struct tmem_op { uint32_t cmd; int32_t pool_id; union { struct { /* for cmd == TMEM_NEW_POOL */ uint64_t uuid[2]; uint32_t flags; } new; struct { uint64_t oid[3]; uint32_t index; uint32_t tmem_offset; uint32_t pfn_offset; uint32_t len; GUEST_HANDLE(void) gmfn; /* guest machine page frame */ } gen; } u; }; DEFINE_GUEST_HANDLE(u64); #else /* __ASSEMBLY__ */ /* In assembly code we cannot use C numeric constant suffixes. */ #define mk_unsigned_long(x) x #endif /* !__ASSEMBLY__ */ #endif /* __XEN_PUBLIC_XEN_H__ */ interface/callback.h 0000644 00000006726 14722073410 0010424 0 ustar 00 /****************************************************************************** * callback.h * * Register guest OS callbacks with Xen. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (c) 2006, Ian Campbell */ #ifndef __XEN_PUBLIC_CALLBACK_H__ #define __XEN_PUBLIC_CALLBACK_H__ #include <xen/interface/xen.h> /* * Prototype for this hypercall is: * long callback_op(int cmd, void *extra_args) * @cmd == CALLBACKOP_??? (callback operation). * @extra_args == Operation-specific extra arguments (NULL if none). */ /* x86: Callback for event delivery. */ #define CALLBACKTYPE_event 0 /* x86: Failsafe callback when guest state cannot be restored by Xen. */ #define CALLBACKTYPE_failsafe 1 /* x86/64 hypervisor: Syscall by 64-bit guest app ('64-on-64-on-64'). */ #define CALLBACKTYPE_syscall 2 /* * x86/32 hypervisor: Only available on x86/32 when supervisor_mode_kernel * feature is enabled. Do not use this callback type in new code. */ #define CALLBACKTYPE_sysenter_deprecated 3 /* x86: Callback for NMI delivery. 
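 * Registered via CALLBACKOP_register (see struct callback_register below).
 * A minimal x86/64 sketch (an illustration only, assuming a Linux guest
 * context where a HYPERVISOR_callback_op() wrapper is available, that
 * xen_callback_t is simply the handler's virtual address on x86/64, and
 * that 'nmi_entry' is a guest-provided entry point):
 *
 *   struct callback_register cb = {
 *       .type    = CALLBACKTYPE_nmi,
 *       .address = (xen_callback_t)nmi_entry,
 *   };
 *   int rc = HYPERVISOR_callback_op(CALLBACKOP_register, &cb);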
*/ #define CALLBACKTYPE_nmi 4 /* * x86: sysenter is only available as follows: * - 32-bit hypervisor: with the supervisor_mode_kernel feature enabled * - 64-bit hypervisor: 32-bit guest applications on Intel CPUs * ('32-on-32-on-64', '32-on-64-on-64') * [nb. also 64-bit guest applications on Intel CPUs * ('64-on-64-on-64'), but syscall is preferred] */ #define CALLBACKTYPE_sysenter 5 /* * x86/64 hypervisor: Syscall by 32-bit guest app on AMD CPUs * ('32-on-32-on-64', '32-on-64-on-64') */ #define CALLBACKTYPE_syscall32 7 /* * Disable event delivery during callback? This flag is ignored for event and * NMI callbacks: event delivery is unconditionally disabled. */ #define _CALLBACKF_mask_events 0 #define CALLBACKF_mask_events (1U << _CALLBACKF_mask_events) /* * Register a callback. */ #define CALLBACKOP_register 0 struct callback_register { uint16_t type; uint16_t flags; xen_callback_t address; }; /* * Unregister a callback. * * Not all callbacks can be unregistered. -EINVAL will be returned if * you attempt to unregister such a callback. */ #define CALLBACKOP_unregister 1 struct callback_unregister { uint16_t type; uint16_t _unused; }; #endif /* __XEN_PUBLIC_CALLBACK_H__ */ interface/vcpu.h 0000644 00000020740 14722073410 0007635 0 ustar 00 /****************************************************************************** * vcpu.h * * VCPU initialisation, query, and hotplug. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (c) 2005, Keir Fraser <keir@xensource.com> */ #ifndef __XEN_PUBLIC_VCPU_H__ #define __XEN_PUBLIC_VCPU_H__ /* * Prototype for this hypercall is: * int vcpu_op(int cmd, int vcpuid, void *extra_args) * @cmd == VCPUOP_??? (VCPU operation). * @vcpuid == VCPU to operate on. * @extra_args == Operation-specific extra arguments (NULL if none). */ /* * Initialise a VCPU. Each VCPU can be initialised only once. A * newly-initialised VCPU will not run until it is brought up by VCPUOP_up. * * @extra_arg == pointer to vcpu_guest_context structure containing initial * state for the VCPU. */ #define VCPUOP_initialise 0 /* * Bring up a VCPU. This makes the VCPU runnable. This operation will fail * if the VCPU has not been initialised (VCPUOP_initialise). */ #define VCPUOP_up 1 /* * Bring down a VCPU (i.e., make it non-runnable). * There are a few caveats that callers should observe: * 1. This operation may return, and VCPUOP_is_up may return false, before the * VCPU stops running (i.e., the command is asynchronous).
It is a good * idea to ensure that the VCPU has entered a non-critical loop before * bringing it down. Alternatively, this operation is guaranteed * synchronous if invoked by the VCPU itself. * 2. After a VCPU is initialised, there is currently no way to drop all its * references to domain memory. Even a VCPU that is down still holds * memory references via its pagetable base pointer and GDT. It is good * practice to move a VCPU onto an 'idle' or default page table, LDT and * GDT before bringing it down. */ #define VCPUOP_down 2 /* Returns 1 if the given VCPU is up. */ #define VCPUOP_is_up 3 /* * Return information about the state and running time of a VCPU. * @extra_arg == pointer to vcpu_runstate_info structure. */ #define VCPUOP_get_runstate_info 4 struct vcpu_runstate_info { /* VCPU's current state (RUNSTATE_*). */ int state; /* When was current state entered (system time, ns)? */ uint64_t state_entry_time; /* * Update indicator set in state_entry_time: * When activated via VMASST_TYPE_runstate_update_flag, set during * updates in guest memory mapped copy of vcpu_runstate_info. */ #define XEN_RUNSTATE_UPDATE (1ULL << 63) /* * Time spent in each RUNSTATE_* (ns). The sum of these times is * guaranteed not to drift from system time. */ uint64_t time[4]; }; DEFINE_GUEST_HANDLE_STRUCT(vcpu_runstate_info); /* VCPU is currently running on a physical CPU. */ #define RUNSTATE_running 0 /* VCPU is runnable, but not currently scheduled on any physical CPU. */ #define RUNSTATE_runnable 1 /* VCPU is blocked (a.k.a. idle). It is therefore not runnable. */ #define RUNSTATE_blocked 2 /* * VCPU is not runnable, but it is not blocked. * This is a 'catch all' state for things like hotplug and pauses by the * system administrator (or for critical sections in the hypervisor). * RUNSTATE_blocked dominates this state (it is the preferred state). */ #define RUNSTATE_offline 3 /* * Register a shared memory area from which the guest may obtain its own * runstate information without needing to execute a hypercall. * Notes: * 1. The registered address may be virtual or physical, depending on the * platform. The virtual address should be registered on x86 systems. * 2. Only one shared area may be registered per VCPU. The shared area is * updated by the hypervisor each time the VCPU is scheduled. Thus * runstate.state will always be RUNSTATE_running and * runstate.state_entry_time will indicate the system time at which the * VCPU was last scheduled to run. * @extra_arg == pointer to vcpu_register_runstate_memory_area structure. */ #define VCPUOP_register_runstate_memory_area 5 struct vcpu_register_runstate_memory_area { union { GUEST_HANDLE(vcpu_runstate_info) h; struct vcpu_runstate_info *v; uint64_t p; } addr; }; /* * Set or stop a VCPU's periodic timer. Every VCPU has one periodic timer * which can be set via these commands. Periods smaller than one millisecond * may not be supported. */ #define VCPUOP_set_periodic_timer 6 /* arg == vcpu_set_periodic_timer_t */ #define VCPUOP_stop_periodic_timer 7 /* arg == NULL */ struct vcpu_set_periodic_timer { uint64_t period_ns; }; DEFINE_GUEST_HANDLE_STRUCT(vcpu_set_periodic_timer); /* * Set or stop a VCPU's single-shot timer. Every VCPU has one single-shot * timer which can be set via these commands. */ #define VCPUOP_set_singleshot_timer 8 /* arg == vcpu_set_singleshot_timer_t */ #define VCPUOP_stop_singleshot_timer 9 /* arg == NULL */ struct vcpu_set_singleshot_timer { uint64_t timeout_abs_ns; uint32_t flags; /* VCPU_SSHOTTMR_???
*/ }; DEFINE_GUEST_HANDLE_STRUCT(vcpu_set_singleshot_timer); /* Flags to VCPUOP_set_singleshot_timer. */ /* Require the timeout to be in the future (return -ETIME if it's passed). */ #define _VCPU_SSHOTTMR_future (0) #define VCPU_SSHOTTMR_future (1U << _VCPU_SSHOTTMR_future) /* * Register a memory location in the guest address space for the * vcpu_info structure. This allows the guest to place the vcpu_info * structure in a convenient place, such as in a per-cpu data area. * The pointer need not be page aligned, but the structure must not * cross a page boundary. */ #define VCPUOP_register_vcpu_info 10 /* arg == struct vcpu_info */ struct vcpu_register_vcpu_info { uint64_t mfn; /* mfn of page to place vcpu_info */ uint32_t offset; /* offset within page */ uint32_t rsvd; /* unused */ }; DEFINE_GUEST_HANDLE_STRUCT(vcpu_register_vcpu_info); /* Send an NMI to the specified VCPU. @extra_arg == NULL. */ #define VCPUOP_send_nmi 11 /* * Get the physical ID information for a pinned vcpu's underlying physical * processor. The physical ID information is architecture-specific. * On x86: id[31:0]=apic_id, id[63:32]=acpi_id. * This command returns -EINVAL if it is not a valid operation for this VCPU. */ #define VCPUOP_get_physid 12 /* arg == vcpu_get_physid_t */ struct vcpu_get_physid { uint64_t phys_id; }; DEFINE_GUEST_HANDLE_STRUCT(vcpu_get_physid); #define xen_vcpu_physid_to_x86_apicid(physid) ((uint32_t)(physid)) #define xen_vcpu_physid_to_x86_acpiid(physid) ((uint32_t)((physid) >> 32)) /* * Register a memory location to get a secondary copy of the vcpu time * parameters. The master copy still exists as part of the vcpu shared * memory area, and this secondary copy is updated whenever the master copy * is updated (and using the same versioning scheme for synchronisation). * * The intent is that this copy may be mapped (RO) into userspace so * that usermode can compute system time using the time info and the * tsc. Usermode will see an array of vcpu_time_info structures, one * for each vcpu, and choose the right one by an existing mechanism * which allows it to get the current vcpu number (such as via a * segment limit). It can then apply the normal algorithm to compute * system time from the tsc. * * @extra_arg == pointer to vcpu_register_time_info_memory_area structure.
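 *
 * A minimal registration sketch (an illustration only, assuming a Linux
 * guest context where HYPERVISOR_vcpu_op() is available, 'cpu' is the
 * target VCPU id, and 'ti' points at a pvclock_vcpu_time_info that does
 * not cross a page boundary):
 *
 *   struct vcpu_register_time_memory_area area = { .addr.v = ti };
 *   int rc = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_time_memory_area,
 *                               cpu, &area);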
*/ #define VCPUOP_register_vcpu_time_memory_area 13 DEFINE_GUEST_HANDLE_STRUCT(vcpu_time_info); struct vcpu_register_time_memory_area { union { GUEST_HANDLE(vcpu_time_info) h; struct pvclock_vcpu_time_info *v; uint64_t p; } addr; }; DEFINE_GUEST_HANDLE_STRUCT(vcpu_register_time_memory_area); #endif /* __XEN_PUBLIC_VCPU_H__ */ interface/xenpmu.h 0000644 00000004744 14722073410 0010202 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef __XEN_PUBLIC_XENPMU_H__ #define __XEN_PUBLIC_XENPMU_H__ #include "xen.h" #define XENPMU_VER_MAJ 0 #define XENPMU_VER_MIN 1 /* * ` enum neg_errnoval * ` HYPERVISOR_xenpmu_op(enum xenpmu_op cmd, struct xenpmu_params *args); * * @cmd == XENPMU_* (PMU operation) * @args == struct xenpmu_params */ /* ` enum xenpmu_op { */ #define XENPMU_mode_get 0 /* Also used for getting PMU version */ #define XENPMU_mode_set 1 #define XENPMU_feature_get 2 #define XENPMU_feature_set 3 #define XENPMU_init 4 #define XENPMU_finish 5 #define XENPMU_lvtpc_set 6 #define XENPMU_flush 7 /* ` } */ /* Parameters structure for HYPERVISOR_xenpmu_op call */ struct xen_pmu_params { /* IN/OUT parameters */ struct { uint32_t maj; uint32_t min; } version; uint64_t val; /* IN parameters */ uint32_t vcpu; uint32_t pad; }; /* PMU modes: * - XENPMU_MODE_OFF: No PMU virtualization * - XENPMU_MODE_SELF: Guests can profile themselves * - XENPMU_MODE_HV: Guests can profile themselves, dom0 profiles * itself and Xen * - XENPMU_MODE_ALL: Only dom0 has access to VPMU and it profiles * everyone: itself, the hypervisor and the guests. */ #define XENPMU_MODE_OFF 0 #define XENPMU_MODE_SELF (1<<0) #define XENPMU_MODE_HV (1<<1) #define XENPMU_MODE_ALL (1<<2) /* * PMU features: * - XENPMU_FEATURE_INTEL_BTS: Intel BTS support (ignored on AMD) */ #define XENPMU_FEATURE_INTEL_BTS 1 /* * Shared PMU data between hypervisor and PV(H) domains. * * The hypervisor fills out this structure during PMU interrupt and sends an * interrupt to appropriate VCPU. * Architecture-independent fields of xen_pmu_data are WO for the hypervisor * and RO for the guest but some fields in xen_pmu_arch can be writable * by both the hypervisor and the guest (see arch-$arch/pmu.h). */ struct xen_pmu_data { /* Interrupted VCPU */ uint32_t vcpu_id; /* * Physical processor on which the interrupt occurred. On non-privileged * guests set to vcpu_id; */ uint32_t pcpu_id; /* * Domain that was interrupted. On non-privileged guests set to * DOMID_SELF. * On privileged guests can be DOMID_SELF, DOMID_XEN, or, when in * XENPMU_MODE_ALL mode, domain ID of another domain. */ domid_t domain_id; uint8_t pad[6]; /* Architecture-specific information */ struct xen_pmu_arch pmu; }; #endif /* __XEN_PUBLIC_XENPMU_H__ */ interface/version.h 0000644 00000004062 14722073410 0010344 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /****************************************************************************** * version.h * * Xen version, type, and compile information. * * Copyright (c) 2005, Nguyen Anh Quynh <aquynh@gmail.com> * Copyright (c) 2005, Keir Fraser <keir@xensource.com> */ #ifndef __XEN_PUBLIC_VERSION_H__ #define __XEN_PUBLIC_VERSION_H__ /* NB. All ops return zero on success, except XENVER_version. */ /* arg == NULL; returns major:minor (16:16). */ #define XENVER_version 0 /* arg == xen_extraversion_t. */ #define XENVER_extraversion 1 struct xen_extraversion { char extraversion[16]; }; #define XEN_EXTRAVERSION_LEN (sizeof(struct xen_extraversion)) /* arg == xen_compile_info_t. 
*/ #define XENVER_compile_info 2 struct xen_compile_info { char compiler[64]; char compile_by[16]; char compile_domain[32]; char compile_date[32]; }; #define XENVER_capabilities 3 struct xen_capabilities_info { char info[1024]; }; #define XEN_CAPABILITIES_INFO_LEN (sizeof(struct xen_capabilities_info)) #define XENVER_changeset 4 struct xen_changeset_info { char info[64]; }; #define XEN_CHANGESET_INFO_LEN (sizeof(struct xen_changeset_info)) #define XENVER_platform_parameters 5 struct xen_platform_parameters { xen_ulong_t virt_start; }; #define XENVER_get_features 6 struct xen_feature_info { unsigned int submap_idx; /* IN: which 32-bit submap to return */ uint32_t submap; /* OUT: 32-bit submap */ }; /* Declares the features reported by XENVER_get_features. */ #include <xen/interface/features.h> /* arg == NULL; returns host memory page size. */ #define XENVER_pagesize 7 /* arg == xen_domain_handle_t. */ #define XENVER_guest_handle 8 #define XENVER_commandline 9 struct xen_commandline { char buf[1024]; }; /* * Return value is the number of bytes written, or XEN_Exx on error. * Calling with empty parameter returns the size of build_id. */ #define XENVER_build_id 10 struct xen_build_id { uint32_t len; /* IN: size of buf[]. */ unsigned char buf[]; }; #endif /* __XEN_PUBLIC_VERSION_H__ */ interface/sched.h 0000644 00000015107 14722073410 0007747 0 ustar 00 /****************************************************************************** * sched.h * * Scheduler state interactions * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (c) 2005, Keir Fraser <keir@xensource.com> */ #ifndef __XEN_PUBLIC_SCHED_H__ #define __XEN_PUBLIC_SCHED_H__ #include <xen/interface/event_channel.h> /* * Guest Scheduler Operations * * The SCHEDOP interface provides mechanisms for a guest to interact * with the scheduler, including yield, blocking and shutting itself * down. */ /* * The prototype for this hypercall is: * long HYPERVISOR_sched_op(enum sched_op cmd, void *arg, ...) * * @cmd == SCHEDOP_??? (scheduler operation). * @arg == Operation-specific extra argument(s), as described below. * ... == Additional Operation-specific extra arguments, described below. * * Versions of Xen prior to 3.0.2 provided only the following legacy version * of this hypercall, supporting only the commands yield, block and shutdown: * long sched_op(int cmd, unsigned long arg) * @cmd == SCHEDOP_??? (scheduler operation). 
* @arg == 0 (SCHEDOP_yield and SCHEDOP_block) * == SHUTDOWN_* code (SCHEDOP_shutdown) * * This legacy version is available to new guests as: * long HYPERVISOR_sched_op_compat(enum sched_op cmd, unsigned long arg) */ /* * Voluntarily yield the CPU. * @arg == NULL. */ #define SCHEDOP_yield 0 /* * Block execution of this VCPU until an event is received for processing. * If called with event upcalls masked, this operation will atomically * reenable event delivery and check for pending events before blocking the * VCPU. This avoids a "wakeup waiting" race. * @arg == NULL. */ #define SCHEDOP_block 1 /* * Halt execution of this domain (all VCPUs) and notify the system controller. * @arg == pointer to sched_shutdown structure. * * If the sched_shutdown_t reason is SHUTDOWN_suspend then * x86 PV guests must also set RDX (EDX for 32-bit guests) to the MFN * of the guest's start info page. RDX/EDX is the third hypercall * argument. * * In addition, when the reason is SHUTDOWN_suspend, this hypercall * returns 1 if suspend was cancelled or the domain was merely * checkpointed, and 0 if it is resuming in a new domain. */ #define SCHEDOP_shutdown 2 /* * Poll a set of event-channel ports. Return when one or more are pending. An * optional timeout may be specified. * @arg == pointer to sched_poll structure. */ #define SCHEDOP_poll 3 /* * Declare a shutdown for another domain. The main use of this function is * in interpreting shutdown requests and reasons for fully-virtualized * domains. A para-virtualized domain may use SCHEDOP_shutdown directly. * @arg == pointer to sched_remote_shutdown structure. */ #define SCHEDOP_remote_shutdown 4 /* * Latch a shutdown code, so that when the domain later shuts down it * reports this code to the control tools. * @arg == sched_shutdown, as for SCHEDOP_shutdown. */ #define SCHEDOP_shutdown_code 5 /* * Setup, poke and destroy a domain watchdog timer. * @arg == pointer to sched_watchdog structure. * With id == 0, setup a domain watchdog timer to cause domain shutdown * after timeout, returns watchdog id. * With id != 0 and timeout == 0, destroy domain watchdog timer. * With id != 0 and timeout != 0, poke watchdog timer and set new timeout. */ #define SCHEDOP_watchdog 6 /* * Override the current vcpu affinity by pinning it to one physical cpu or * undo this override, restoring the previous affinity. * @arg == pointer to sched_pin_override structure. * * A negative pcpu value will undo a previous pin override and restore the * previous cpu affinity. * This call is allowed for the hardware domain only and requires the cpu * to be part of the domain's cpupool. */ #define SCHEDOP_pin_override 7 struct sched_shutdown { unsigned int reason; /* SHUTDOWN_* => shutdown reason */ }; DEFINE_GUEST_HANDLE_STRUCT(sched_shutdown); struct sched_poll { GUEST_HANDLE(evtchn_port_t) ports; unsigned int nr_ports; uint64_t timeout; }; DEFINE_GUEST_HANDLE_STRUCT(sched_poll); struct sched_remote_shutdown { domid_t domain_id; /* Remote domain ID */ unsigned int reason; /* SHUTDOWN_* => shutdown reason */ }; DEFINE_GUEST_HANDLE_STRUCT(sched_remote_shutdown); struct sched_watchdog { uint32_t id; /* watchdog ID */ uint32_t timeout; /* timeout */ }; DEFINE_GUEST_HANDLE_STRUCT(sched_watchdog); struct sched_pin_override { int32_t pcpu; }; DEFINE_GUEST_HANDLE_STRUCT(sched_pin_override); /* * Reason codes for SCHEDOP_shutdown. These may be interpreted by control * software to determine the appropriate action. For the most part, Xen does * not care about the shutdown code.
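 *
 * For example (a sketch only, using the HYPERVISOR_sched_op prototype
 * described at the top of this file), a guest requesting a clean reboot
 * would issue:
 *
 *   struct sched_shutdown arg = { .reason = SHUTDOWN_reboot };
 *   int rc = HYPERVISOR_sched_op(SCHEDOP_shutdown, &arg);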
*/ #define SHUTDOWN_poweroff 0 /* Domain exited normally. Clean up and kill. */ #define SHUTDOWN_reboot 1 /* Clean up, kill, and then restart. */ #define SHUTDOWN_suspend 2 /* Clean up, save suspend info, kill. */ #define SHUTDOWN_crash 3 /* Tell controller we've crashed. */ #define SHUTDOWN_watchdog 4 /* Restart because the watchdog timer expired. */ /* * The domain has asked to perform a 'soft reset'. The expected behavior is to * reset internal Xen state for the domain, returning it to the point where it * was created but leaving the domain's memory contents and vCPU contexts * intact. This will allow the domain to start over and set up all Xen-specific * interfaces again. */ #define SHUTDOWN_soft_reset 5 #define SHUTDOWN_MAX 5 /* Maximum valid shutdown reason. */ #endif /* __XEN_PUBLIC_SCHED_H__ */ interface/grant_table.h 0000644 00000050720 14722073410 0011143 0 ustar 00 /****************************************************************************** * grant_table.h * * Interface for granting foreign access to page frames, and receiving * page-ownership transfers. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (c) 2004, K A Fraser */ #ifndef __XEN_PUBLIC_GRANT_TABLE_H__ #define __XEN_PUBLIC_GRANT_TABLE_H__ #include <xen/interface/xen.h> /*********************************** * GRANT TABLE REPRESENTATION */ /* Some rough guidelines on accessing and updating grant-table entries * in a concurrency-safe manner. For more information, Linux contains a * reference implementation for guest OSes (arch/xen/kernel/grant_table.c). * * NB. WMB is a no-op on current-generation x86 processors. However, a * compiler barrier will still be required. * * Introducing a valid entry into the grant table: * 1. Write ent->domid. * 2. Write ent->frame: * GTF_permit_access: Frame to which access is permitted. * GTF_accept_transfer: Pseudo-phys frame slot being filled by new * frame, or zero if none. * 3. Write memory barrier (WMB). * 4. Write ent->flags, inc. valid type. * * Invalidating an unused GTF_permit_access entry: * 1. flags = ent->flags. * 2. Observe that !(flags & (GTF_reading|GTF_writing)). * 3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0). * NB. No need for WMB as reuse of entry is control-dependent on success of * step 3, and all architectures guarantee ordering of ctrl-dep writes. * * Invalidating an in-use GTF_permit_access entry: * This cannot be done directly. Request assistance from the domain controller, * which can set a timeout on the use of a grant entry and take necessary * action.
(NB. This is not yet implemented!). * * Invalidating an unused GTF_accept_transfer entry: * 1. flags = ent->flags. * 2. Observe that !(flags & GTF_transfer_committed). [*] * 3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0). * NB. No need for WMB as reuse of entry is control-dependent on success of * step 3, and all architectures guarantee ordering of ctrl-dep writes. * [*] If GTF_transfer_committed is set then the grant entry is 'committed'. * The guest must /not/ modify the grant entry until the address of the * transferred frame is written. It is safe for the guest to spin waiting * for this to occur (detect by observing GTF_transfer_completed in * ent->flags). * * Invalidating a committed GTF_accept_transfer entry: * 1. Wait for (ent->flags & GTF_transfer_completed). * * Changing a GTF_permit_access from writable to read-only: * Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing. * * Changing a GTF_permit_access from read-only to writable: * Use SMP-safe bit-setting instruction. */ /* * Reference to a grant entry in a specified domain's grant table. */ typedef uint32_t grant_ref_t; /* * A grant table comprises a packed array of grant entries in one or more * page frames shared between Xen and a guest. * [XEN]: This field is written by Xen and read by the sharing guest. * [GST]: This field is written by the guest and read by Xen. */ /* * Version 1 of the grant table entry structure is maintained purely * for backwards compatibility. New guests should use version 2. */ struct grant_entry_v1 { /* GTF_xxx: various type and flag information. [XEN,GST] */ uint16_t flags; /* The domain being granted foreign privileges. [GST] */ domid_t domid; /* * GTF_permit_access: Frame that @domid is allowed to map and access. [GST] * GTF_accept_transfer: Frame whose ownership transferred by @domid. [XEN] */ uint32_t frame; }; /* * Type of grant entry. * GTF_invalid: This grant entry grants no privileges. * GTF_permit_access: Allow @domid to map/access @frame. * GTF_accept_transfer: Allow @domid to transfer ownership of one page frame * to this guest. Xen writes the page number to @frame. * GTF_transitive: Allow @domid to transitively access a subrange of * @trans_grant in @trans_domid. No mappings are allowed. */ #define GTF_invalid (0U<<0) #define GTF_permit_access (1U<<0) #define GTF_accept_transfer (2U<<0) #define GTF_transitive (3U<<0) #define GTF_type_mask (3U<<0) /* * Subflags for GTF_permit_access. * GTF_readonly: Restrict @domid to read-only mappings and accesses. [GST] * GTF_reading: Grant entry is currently mapped for reading by @domid. [XEN] * GTF_writing: Grant entry is currently mapped for writing by @domid. [XEN] * GTF_sub_page: Grant access to only a subrange of the page. @domid * will only be allowed to copy from the grant, and not * map it. [GST] */ #define _GTF_readonly (2) #define GTF_readonly (1U<<_GTF_readonly) #define _GTF_reading (3) #define GTF_reading (1U<<_GTF_reading) #define _GTF_writing (4) #define GTF_writing (1U<<_GTF_writing) #define _GTF_sub_page (8) #define GTF_sub_page (1U<<_GTF_sub_page) /* * Subflags for GTF_accept_transfer: * GTF_transfer_committed: Xen sets this flag to indicate that it is committed * to transferring ownership of a page frame. When a guest sees this flag * it must /not/ modify the grant entry until GTF_transfer_completed is * set by Xen. * GTF_transfer_completed: It is safe for the guest to spin-wait on this flag * after reading GTF_transfer_committed. 
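 *
 * A minimal sketch of the spin-wait described above (illustrative only;
 * 'ent' is a hypothetical pointer to the guest's grant entry, and
 * READ_ONCE()/cpu_relax() are the usual Linux primitives):
 *
 *   while (!(READ_ONCE(ent->flags) & GTF_transfer_completed))
 *       cpu_relax();
 *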
Xen will always write the frame * address, followed by ORing this flag, in a timely manner. */ #define _GTF_transfer_committed (2) #define GTF_transfer_committed (1U<<_GTF_transfer_committed) #define _GTF_transfer_completed (3) #define GTF_transfer_completed (1U<<_GTF_transfer_completed) /* * Version 2 grant table entries. These fulfil the same role as * version 1 entries, but can represent more complicated operations. * Any given domain will have either a version 1 or a version 2 table, * and every entry in the table will be the same version. * * The interface by which domains use grant references does not depend * on the grant table version in use by the other domain. */ /* * Version 1 and version 2 grant entries share a common prefix. The * fields of the prefix are documented as part of struct * grant_entry_v1. */ struct grant_entry_header { uint16_t flags; domid_t domid; }; /* * Version 2 of the grant entry structure. This is a union because three * different types are supported: full_page, sub_page and transitive. */ union grant_entry_v2 { struct grant_entry_header hdr; /* * This member is used for V1-style full page grants, where either: * * -- hdr.type is GTF_accept_transfer, or * -- hdr.type is GTF_permit_access and GTF_sub_page is not set. * * In that case, the frame field has the same semantics as the * field of the same name in the V1 entry structure. */ struct { struct grant_entry_header hdr; uint32_t pad0; uint64_t frame; } full_page; /* * If the grant type is GTF_permit_access and GTF_sub_page is set, * @domid is allowed to access bytes [@page_off,@page_off+@length) * in frame @frame. */ struct { struct grant_entry_header hdr; uint16_t page_off; uint16_t length; uint64_t frame; } sub_page; /* * If the grant is GTF_transitive, @domid is allowed to use the * grant @gref in domain @trans_domid, as if it were the local * domain. Obviously, the transitive access must be compatible * with the original grant. */ struct { struct grant_entry_header hdr; domid_t trans_domid; uint16_t pad0; grant_ref_t gref; } transitive; uint32_t __spacer[4]; /* Pad to a power of two */ }; typedef uint16_t grant_status_t; /*********************************** * GRANT TABLE QUERIES AND USES */ /* * Handle to track a mapping created via a grant reference. */ typedef uint32_t grant_handle_t; /* * GNTTABOP_map_grant_ref: Map the grant entry (<dom>,<ref>) for access * by devices and/or host CPUs. If successful, <handle> is a tracking number * that must be presented later to destroy the mapping(s). On error, <handle> * is a negative status code. * NOTES: * 1. If GNTMAP_device_map is specified then <dev_bus_addr> is the address * via which I/O devices may access the granted frame. * 2. If GNTMAP_host_map is specified then a mapping will be added at * either a host virtual address in the current address space, or at * a PTE at the specified machine address. The type of mapping to * perform is selected through the GNTMAP_contains_pte flag, and the * address is specified in <host_addr>. * 3. Mappings should only be destroyed via GNTTABOP_unmap_grant_ref. If a * host mapping is destroyed by other means then it is *NOT* guaranteed * to be accounted to the correct grant reference! */ #define GNTTABOP_map_grant_ref 0 struct gnttab_map_grant_ref { /* IN parameters. */ uint64_t host_addr; uint32_t flags; /* GNTMAP_* */ grant_ref_t ref; domid_t dom; /* OUT parameters.
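 *
 * Illustrative use of this operation, as a sketch only (assumes the Linux
 * HYPERVISOR_grant_table_op() wrapper; 'vaddr', 'gref' and 'otherend_id'
 * are hypothetical values obtained elsewhere):
 *
 *   struct gnttab_map_grant_ref op = {
 *       .host_addr = vaddr,
 *       .flags     = GNTMAP_host_map,
 *       .ref       = gref,
 *       .dom       = otherend_id,
 *   };
 *   HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1);
 *   (on success, op.status == GNTST_okay and op.handle is kept for the
 *   later GNTTABOP_unmap_grant_ref)
 *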
*/ int16_t status; /* GNTST_* */ grant_handle_t handle; uint64_t dev_bus_addr; }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_map_grant_ref); /* * GNTTABOP_unmap_grant_ref: Destroy one or more grant-reference mappings * tracked by <handle>. If <host_addr> or <dev_bus_addr> is zero, that * field is ignored. If non-zero, they must refer to a device/host mapping * that is tracked by <handle>. * NOTES: * 1. The call may fail in an undefined manner if either mapping is not * tracked by <handle>. * 2. After executing a batch of unmaps, it is guaranteed that no stale * mappings will remain in the device or host TLBs. */ #define GNTTABOP_unmap_grant_ref 1 struct gnttab_unmap_grant_ref { /* IN parameters. */ uint64_t host_addr; uint64_t dev_bus_addr; grant_handle_t handle; /* OUT parameters. */ int16_t status; /* GNTST_* */ }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_unmap_grant_ref); /* * GNTTABOP_setup_table: Set up a grant table for <dom> comprising at least * <nr_frames> pages. The frame addresses are written to the <frame_list>. * Only <nr_frames> addresses are written, even if the table is larger. * NOTES: * 1. <dom> may be specified as DOMID_SELF. * 2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF. * 3. Xen may not support more than a single grant-table page per domain. */ #define GNTTABOP_setup_table 2 struct gnttab_setup_table { /* IN parameters. */ domid_t dom; uint32_t nr_frames; /* OUT parameters. */ int16_t status; /* GNTST_* */ GUEST_HANDLE(xen_pfn_t) frame_list; }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_setup_table); /* * GNTTABOP_dump_table: Dump the contents of the grant table to the * xen console. Debugging use only. */ #define GNTTABOP_dump_table 3 struct gnttab_dump_table { /* IN parameters. */ domid_t dom; /* OUT parameters. */ int16_t status; /* GNTST_* */ }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_dump_table); /* * GNTTABOP_transfer: Transfer <frame> to a foreign domain. The * foreign domain has previously registered its interest in the transfer via * <domid, ref>. * * Note that, even if the transfer fails, the specified page no longer belongs * to the calling domain *unless* the error is GNTST_bad_page. */ #define GNTTABOP_transfer 4 struct gnttab_transfer { /* IN parameters. */ xen_pfn_t mfn; domid_t domid; grant_ref_t ref; /* OUT parameters. */ int16_t status; }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_transfer); /* * GNTTABOP_copy: Hypervisor-based copy. * Source and destination can be either MFNs or, for foreign domains, * grant references. The foreign domain has to grant read/write access * in its grant table. * * The flags specify what type the source and destination are (either MFN * or grant reference). * * Note that this can also be used to copy data between two domains * via a third party if the source and destination domains have previously * granted appropriate access to their pages to the third party. * * source_offset specifies an offset in the source frame, dest_offset * the offset in the target frame, and len specifies the number of * bytes to be copied. */ #define _GNTCOPY_source_gref (0) #define GNTCOPY_source_gref (1<<_GNTCOPY_source_gref) #define _GNTCOPY_dest_gref (1) #define GNTCOPY_dest_gref (1<<_GNTCOPY_dest_gref) #define GNTTABOP_copy 5 struct gnttab_copy { /* IN parameters. */ struct { union { grant_ref_t ref; xen_pfn_t gmfn; } u; domid_t domid; uint16_t offset; } source, dest; uint16_t len; uint16_t flags; /* GNTCOPY_* */ /* OUT parameters.
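 *
 * For example (a hedged sketch; src_gref, src_domid and dst_gmfn are
 * hypothetical, both offsets default to zero, and 4096 stands in for the
 * Xen page size):
 *
 *   struct gnttab_copy cp = {
 *       .source.u.ref = src_gref,
 *       .source.domid = src_domid,
 *       .dest.u.gmfn  = dst_gmfn,
 *       .dest.domid   = DOMID_SELF,
 *       .len          = 4096,
 *       .flags        = GNTCOPY_source_gref,
 *   };
 *   HYPERVISOR_grant_table_op(GNTTABOP_copy, &cp, 1);
 *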
*/ int16_t status; }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_copy); /* * GNTTABOP_query_size: Query the current and maximum sizes of the shared * grant table. * NOTES: * 1. <dom> may be specified as DOMID_SELF. * 2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF. */ #define GNTTABOP_query_size 6 struct gnttab_query_size { /* IN parameters. */ domid_t dom; /* OUT parameters. */ uint32_t nr_frames; uint32_t max_nr_frames; int16_t status; /* GNTST_* */ }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_query_size); /* * GNTTABOP_unmap_and_replace: Destroy one or more grant-reference mappings * tracked by <handle> but atomically replace the page table entry with one * pointing to the machine address under <new_addr>. <new_addr> will be * redirected to the null entry. * NOTES: * 1. The call may fail in an undefined manner if either mapping is not * tracked by <handle>. * 2. After executing a batch of unmaps, it is guaranteed that no stale * mappings will remain in the device or host TLBs. */ #define GNTTABOP_unmap_and_replace 7 struct gnttab_unmap_and_replace { /* IN parameters. */ uint64_t host_addr; uint64_t new_addr; grant_handle_t handle; /* OUT parameters. */ int16_t status; /* GNTST_* */ }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_unmap_and_replace); /* * GNTTABOP_set_version: Request a particular version of the grant * table shared table structure. This operation can only be performed * once in any given domain. It must be performed before any grants * are activated; otherwise, the domain will be stuck with version 1. * The only defined versions are 1 and 2. */ #define GNTTABOP_set_version 8 struct gnttab_set_version { /* IN parameters */ uint32_t version; }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_set_version); /* * GNTTABOP_get_status_frames: Get the list of frames used to store grant * status for <dom>. In grant format version 2, the status is separated * from the other shared grant fields to allow more efficient synchronization * using barriers instead of atomic cmpexch operations. * <nr_frames> specify the size of vector <frame_list>. * The frame addresses are returned in the <frame_list>. * Only <nr_frames> addresses are returned, even if the table is larger. * NOTES: * 1. <dom> may be specified as DOMID_SELF. * 2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF. */ #define GNTTABOP_get_status_frames 9 struct gnttab_get_status_frames { /* IN parameters. */ uint32_t nr_frames; domid_t dom; /* OUT parameters. */ int16_t status; /* GNTST_* */ GUEST_HANDLE(uint64_t) frame_list; }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_status_frames); /* * GNTTABOP_get_version: Get the grant table version which is in * effect for domain <dom>. */ #define GNTTABOP_get_version 10 struct gnttab_get_version { /* IN parameters */ domid_t dom; uint16_t pad; /* OUT parameters */ uint32_t version; }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_get_version); /* * Issue one or more cache maintenance operations on a portion of a * page granted to the calling domain by a foreign domain. */ #define GNTTABOP_cache_flush 12 struct gnttab_cache_flush { union { uint64_t dev_bus_addr; grant_ref_t ref; } a; uint16_t offset; /* offset from start of grant */ uint16_t length; /* size within the grant */ #define GNTTAB_CACHE_CLEAN (1<<0) #define GNTTAB_CACHE_INVAL (1<<1) #define GNTTAB_CACHE_SOURCE_GREF (1<<31) uint32_t op; }; DEFINE_GUEST_HANDLE_STRUCT(gnttab_cache_flush); /* * Bitfield values for update_pin_status.flags. */ /* Map the grant entry for access by I/O devices. 
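 *
 * The GNTMAP_* values below are OR-ed together in the flags field of
 * gnttab_map_grant_ref; for instance (illustrative only), a read-only
 * CPU-side mapping of a foreign frame would be requested with
 *
 *   op.flags = GNTMAP_host_map | GNTMAP_readonly;
 *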
*/ #define _GNTMAP_device_map (0) #define GNTMAP_device_map (1<<_GNTMAP_device_map) /* Map the grant entry for access by host CPUs. */ #define _GNTMAP_host_map (1) #define GNTMAP_host_map (1<<_GNTMAP_host_map) /* Accesses to the granted frame will be restricted to read-only access. */ #define _GNTMAP_readonly (2) #define GNTMAP_readonly (1<<_GNTMAP_readonly) /* * GNTMAP_host_map subflag: * 0 => The host mapping is usable only by the guest OS. * 1 => The host mapping is usable by guest OS + current application. */ #define _GNTMAP_application_map (3) #define GNTMAP_application_map (1<<_GNTMAP_application_map) /* * GNTMAP_contains_pte subflag: * 0 => This map request contains a host virtual address. * 1 => This map request contains the machine address of the PTE to update. */ #define _GNTMAP_contains_pte (4) #define GNTMAP_contains_pte (1<<_GNTMAP_contains_pte) /* * Bits to be placed in guest kernel available PTE bits (architecture * dependent; only supported when XENFEAT_gnttab_map_avail_bits is set). */ #define _GNTMAP_guest_avail0 (16) #define GNTMAP_guest_avail_mask ((uint32_t)~0 << _GNTMAP_guest_avail0) /* * Values for error status returns. All errors are -ve. */ #define GNTST_okay (0) /* Normal return. */ #define GNTST_general_error (-1) /* General undefined error. */ #define GNTST_bad_domain (-2) /* Unrecognised domain id. */ #define GNTST_bad_gntref (-3) /* Unrecognised or inappropriate gntref. */ #define GNTST_bad_handle (-4) /* Unrecognised or inappropriate handle. */ #define GNTST_bad_virt_addr (-5) /* Inappropriate virtual address to map. */ #define GNTST_bad_dev_addr (-6) /* Inappropriate device address to unmap. */ #define GNTST_no_device_space (-7) /* Out of space in I/O MMU. */ #define GNTST_permission_denied (-8) /* Not enough privilege for operation. */ #define GNTST_bad_page (-9) /* Specified page was invalid for op. */ #define GNTST_bad_copy_arg (-10) /* Copy arguments cross page boundary. */ #define GNTST_address_too_big (-11) /* Transfer page address too large. */ #define GNTST_eagain (-12) /* Operation not done; try again. */ #define GNTTABOP_error_msgs { \ "okay", \ "undefined error", \ "unrecognised domain id", \ "invalid grant reference", \ "invalid mapping handle", \ "invalid virtual address", \ "invalid device address", \ "no spare translation slot in the I/O MMU", \ "permission denied", \ "bad page", \ "copy arguments cross page boundary", \ "page address size too large", \ "operation not done; try again" \ } #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */ interface/hvm/params.h 0000644 00000011003 14722073410 0010735 0 ustar 00 /* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. */ #ifndef __XEN_PUBLIC_HVM_PARAMS_H__ #define __XEN_PUBLIC_HVM_PARAMS_H__ #include <xen/interface/hvm/hvm_op.h> /* * Parameter space for HVMOP_{set,get}_param. */ #define HVM_PARAM_CALLBACK_IRQ 0 /* * How should CPU0 event-channel notifications be delivered? * * If val == 0 then CPU0 event-channel notifications are not delivered. * If val != 0, val[63:56] encodes the type, as follows: */ #define HVM_PARAM_CALLBACK_TYPE_GSI 0 /* * val[55:0] is a delivery GSI. GSI 0 cannot be used, as it aliases val == 0, * and disables all notifications. */ #define HVM_PARAM_CALLBACK_TYPE_PCI_INTX 1 /* * val[55:0] is a delivery PCI INTx line: * Domain = val[47:32], Bus = val[31:16] DevFn = val[15:8], IntX = val[1:0] */ #if defined(__i386__) || defined(__x86_64__) #define HVM_PARAM_CALLBACK_TYPE_VECTOR 2 /* * val[7:0] is a vector number. Check for XENFEAT_hvm_callback_vector to know * if this delivery method is available. */ #elif defined(__arm__) || defined(__aarch64__) #define HVM_PARAM_CALLBACK_TYPE_PPI 2 /* * val[55:16] needs to be zero. * val[15:8] is interrupt flag of the PPI used by event-channel: * bit 8: the PPI is edge(1) or level(0) triggered * bit 9: the PPI is active low(1) or high(0) * val[7:0] is a PPI number used by event-channel. * This is only used by ARM/ARM64 and masking/eoi the interrupt associated to * the notification is handled by the interrupt controller. */ #endif #define HVM_PARAM_STORE_PFN 1 #define HVM_PARAM_STORE_EVTCHN 2 #define HVM_PARAM_PAE_ENABLED 4 #define HVM_PARAM_IOREQ_PFN 5 #define HVM_PARAM_BUFIOREQ_PFN 6 /* * Set mode for virtual timers (currently x86 only): * delay_for_missed_ticks (default): * Do not advance a vcpu's time beyond the correct delivery time for * interrupts that have been missed due to preemption. Deliver missed * interrupts when the vcpu is rescheduled and advance the vcpu's virtual * time stepwise for each one. * no_delay_for_missed_ticks: * As above, missed interrupts are delivered, but guest time always tracks * wallclock (i.e., real) time while doing so. * no_missed_ticks_pending: * No missed interrupts are held pending. Instead, to ensure ticks are * delivered at some non-zero rate, if we detect missed ticks then the * internal tick alarm is not disabled if the VCPU is preempted during the * next tick period. * one_missed_tick_pending: * Missed interrupts are collapsed together and delivered as one 'late tick'. * Guest time always tracks wallclock (i.e., real) time. */ #define HVM_PARAM_TIMER_MODE 10 #define HVMPTM_delay_for_missed_ticks 0 #define HVMPTM_no_delay_for_missed_ticks 1 #define HVMPTM_no_missed_ticks_pending 2 #define HVMPTM_one_missed_tick_pending 3 /* Boolean: Enable virtual HPET (high-precision event timer)? (x86-only) */ #define HVM_PARAM_HPET_ENABLED 11 /* Identity-map page directory used by Intel EPT when CR0.PG=0. */ #define HVM_PARAM_IDENT_PT 12 /* Device Model domain, defaults to 0. */ #define HVM_PARAM_DM_DOMAIN 13 /* ACPI S state: currently support S0 and S3 on x86. */ #define HVM_PARAM_ACPI_S_STATE 14 /* TSS used on Intel when CR0.PE=0. 
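 *
 * (Editorial sketch, not part of the ABI text: a guest programs the HVM
 * parameters defined in this file with HVMOP_set_param and struct
 * xen_hvm_param from hvm_op.h, included above. For example, selecting
 * vector callback delivery with a hypothetical vector 0xf3:
 *
 *   struct xen_hvm_param p = {
 *       .domid = DOMID_SELF,
 *       .index = HVM_PARAM_CALLBACK_IRQ,
 *       .value = ((uint64_t)HVM_PARAM_CALLBACK_TYPE_VECTOR << 56) | 0xf3,
 *   };
 *   HYPERVISOR_hvm_op(HVMOP_set_param, &p);
 * end of editorial sketch.)
 *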
*/ #define HVM_PARAM_VM86_TSS 15 /* Boolean: Enable aligning all periodic vpts to reduce interrupts */ #define HVM_PARAM_VPT_ALIGN 16 /* Console debug shared memory ring and event channel */ #define HVM_PARAM_CONSOLE_PFN 17 #define HVM_PARAM_CONSOLE_EVTCHN 18 #define HVM_NR_PARAMS 19 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */ interface/hvm/hvm_vcpu.h 0000644 00000007645 14722073410 0011312 0 ustar 00 /* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (c) 2015, Roger Pau Monne <roger.pau@citrix.com> */ #ifndef __XEN_PUBLIC_HVM_HVM_VCPU_H__ #define __XEN_PUBLIC_HVM_HVM_VCPU_H__ #include "../xen.h" struct vcpu_hvm_x86_32 { uint32_t eax; uint32_t ecx; uint32_t edx; uint32_t ebx; uint32_t esp; uint32_t ebp; uint32_t esi; uint32_t edi; uint32_t eip; uint32_t eflags; uint32_t cr0; uint32_t cr3; uint32_t cr4; uint32_t pad1; /* * EFER should only be used to set the NXE bit (if required) * when starting a vCPU in 32bit mode with paging enabled or * to set the LME/LMA bits in order to start the vCPU in * compatibility mode. */ uint64_t efer; uint32_t cs_base; uint32_t ds_base; uint32_t ss_base; uint32_t es_base; uint32_t tr_base; uint32_t cs_limit; uint32_t ds_limit; uint32_t ss_limit; uint32_t es_limit; uint32_t tr_limit; uint16_t cs_ar; uint16_t ds_ar; uint16_t ss_ar; uint16_t es_ar; uint16_t tr_ar; uint16_t pad2[3]; }; /* * The layout of the _ar fields of the segment registers is the * following: * * Bits [0,3]: type (bits 40-43). * Bit 4: s (descriptor type, bit 44). * Bit [5,6]: dpl (descriptor privilege level, bits 45-46). * Bit 7: p (segment-present, bit 47). * Bit 8: avl (available for system software, bit 52). * Bit 9: l (64-bit code segment, bit 53). * Bit 10: db (meaning depends on the segment, bit 54). * Bit 11: g (granularity, bit 55) * Bits [12,15]: unused, must be blank. * * A more complete description of the meaning of this fields can be * obtained from the Intel SDM, Volume 3, section 3.4.5. */ struct vcpu_hvm_x86_64 { uint64_t rax; uint64_t rcx; uint64_t rdx; uint64_t rbx; uint64_t rsp; uint64_t rbp; uint64_t rsi; uint64_t rdi; uint64_t rip; uint64_t rflags; uint64_t cr0; uint64_t cr3; uint64_t cr4; uint64_t efer; /* * Using VCPU_HVM_MODE_64B implies that the vCPU is launched * directly in long mode, so the cached parts of the segment * registers get set to match that environment. * * If the user wants to launch the vCPU in compatibility mode * the 32-bit structure should be used instead. 
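 *
 * For instance, a sketch of a direct long-mode bring-up (illustrative
 * only; entry_rip and pgtable_paddr are hypothetical values prepared by
 * the loader, CR0 carries PG|PE, CR4 carries PAE, EFER carries LME|LMA,
 * and the filled context is then typically handed to Xen via the
 * VCPUOP_initialise hypercall):
 *
 *   struct vcpu_hvm_context ctx = {
 *       .mode = VCPU_HVM_MODE_64B,
 *       .cpu_regs.x86_64.rip  = entry_rip,
 *       .cpu_regs.x86_64.cr3  = pgtable_paddr,
 *       .cpu_regs.x86_64.cr0  = 0x80000001,
 *       .cpu_regs.x86_64.cr4  = 0x00000020,
 *       .cpu_regs.x86_64.efer = 0x00000500,
 *   };
 *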
*/ }; struct vcpu_hvm_context { #define VCPU_HVM_MODE_32B 0 /* 32bit fields of the structure will be used. */ #define VCPU_HVM_MODE_64B 1 /* 64bit fields of the structure will be used. */ uint32_t mode; uint32_t pad; /* CPU registers. */ union { struct vcpu_hvm_x86_32 x86_32; struct vcpu_hvm_x86_64 x86_64; } cpu_regs; }; typedef struct vcpu_hvm_context vcpu_hvm_context_t; #endif /* __XEN_PUBLIC_HVM_HVM_VCPU_H__ */ /* * Local variables: * mode: C * c-file-style: "BSD" * c-basic-offset: 4 * tab-width: 4 * indent-tabs-mode: nil * End: */ interface/hvm/hvm_op.h 0000644 00000005001 14722073410 0010733 0 ustar 00 /* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. */ #ifndef __XEN_PUBLIC_HVM_HVM_OP_H__ #define __XEN_PUBLIC_HVM_HVM_OP_H__ /* Get/set subcommands: the second argument of the hypercall is a * pointer to a xen_hvm_param struct. */ #define HVMOP_set_param 0 #define HVMOP_get_param 1 struct xen_hvm_param { domid_t domid; /* IN */ uint32_t index; /* IN */ uint64_t value; /* IN/OUT */ }; DEFINE_GUEST_HANDLE_STRUCT(xen_hvm_param); /* Hint from PV drivers for pagetable destruction. */ #define HVMOP_pagetable_dying 9 struct xen_hvm_pagetable_dying { /* Domain with a pagetable about to be destroyed. */ domid_t domid; /* guest physical address of the toplevel pagetable dying */ aligned_u64 gpa; }; typedef struct xen_hvm_pagetable_dying xen_hvm_pagetable_dying_t; DEFINE_GUEST_HANDLE_STRUCT(xen_hvm_pagetable_dying_t); enum hvmmem_type_t { HVMMEM_ram_rw, /* Normal read/write guest RAM */ HVMMEM_ram_ro, /* Read-only; writes are discarded */ HVMMEM_mmio_dm, /* Reads and writes go to the device model */ }; #define HVMOP_get_mem_type 15 /* Return hvmmem_type_t for the specified pfn. */ struct xen_hvm_get_mem_type { /* Domain to be queried. */ domid_t domid; /* OUT variable. */ uint16_t mem_type; uint16_t pad[2]; /* align next field on 8-byte boundary */ /* IN variable.
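 *
 * Illustrative query (a sketch; 'gfn' is a hypothetical guest frame
 * number and HYPERVISOR_hvm_op() the usual Linux wrapper):
 *
 *   struct xen_hvm_get_mem_type t = { .domid = DOMID_SELF, .pfn = gfn };
 *   if (HYPERVISOR_hvm_op(HVMOP_get_mem_type, &t) == 0)
 *       (t.mem_type then holds one of the hvmmem_type_t values)
 *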
*/ uint64_t pfn; }; DEFINE_GUEST_HANDLE_STRUCT(xen_hvm_get_mem_type); #endif /* __XEN_PUBLIC_HVM_HVM_OP_H__ */ interface/hvm/start_info.h 0000644 00000016371 14722073410 0011627 0 ustar 00 /* * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (c) 2016, Citrix Systems, Inc. */ #ifndef __XEN_PUBLIC_ARCH_X86_HVM_START_INFO_H__ #define __XEN_PUBLIC_ARCH_X86_HVM_START_INFO_H__ /* * Start of day structure passed to PVH guests and to HVM guests in %ebx. * * NOTE: nothing will be loaded at physical address 0, so a 0 value in any * of the address fields should be treated as not present. * * 0 +----------------+ * | magic | Contains the magic value XEN_HVM_START_MAGIC_VALUE * | | ("xEn3" with the 0x80 bit of the "E" set). * 4 +----------------+ * | version | Version of this structure. Current version is 1. New * | | versions are guaranteed to be backwards-compatible. * 8 +----------------+ * | flags | SIF_xxx flags. * 12 +----------------+ * | nr_modules | Number of modules passed to the kernel. * 16 +----------------+ * | modlist_paddr | Physical address of an array of modules * | | (layout of the structure below). * 24 +----------------+ * | cmdline_paddr | Physical address of the command line, * | | a zero-terminated ASCII string. * 32 +----------------+ * | rsdp_paddr | Physical address of the RSDP ACPI data structure. * 40 +----------------+ * | memmap_paddr | Physical address of the (optional) memory map. Only * | | present in version 1 and newer of the structure. * 48 +----------------+ * | memmap_entries | Number of entries in the memory map table. Zero * | | if there is no memory map being provided. Only * | | present in version 1 and newer of the structure. * 52 +----------------+ * | reserved | Version 1 and newer only. * 56 +----------------+ * * The layout of each entry in the module structure is the following: * * 0 +----------------+ * | paddr | Physical address of the module. * 8 +----------------+ * | size | Size of the module in bytes. * 16 +----------------+ * | cmdline_paddr | Physical address of the command line, * | | a zero-terminated ASCII string. * 24 +----------------+ * | reserved | * 32 +----------------+ * * The layout of each entry in the memory map table is as follows: * * 0 +----------------+ * | addr | Base address * 8 +----------------+ * | size | Size of mapping in bytes * 16 +----------------+ * | type | Type of mapping as defined between the hypervisor * | | and guest. See XEN_HVM_MEMMAP_TYPE_* values below. 
* 20 +----------------+ * | reserved | * 24 +----------------+ * * Addresses and sizes are always 64-bit little-endian unsigned integers. * * NB: Xen on x86 will always try to place all the data below the 4GiB * boundary. * * Version numbers of the hvm_start_info structure have evolved like this: * * Version 0: Initial implementation. * * Version 1: Added the memmap_paddr/memmap_entries fields (plus 4 bytes of * padding) to the end of the hvm_start_info struct. These new * fields can be used to pass a memory map to the guest. The * memory map is optional and so guests that understand version 1 * of the structure must check that memmap_entries is non-zero * before trying to read the memory map. */ #define XEN_HVM_START_MAGIC_VALUE 0x336ec578 /* * The values used in the type field of the memory map table entries are * defined below and match the Address Range Types as defined in the "System * Address Map Interfaces" section of the ACPI Specification. Please refer to * section 15 in version 6.2 of the ACPI spec: http://uefi.org/specifications */ #define XEN_HVM_MEMMAP_TYPE_RAM 1 #define XEN_HVM_MEMMAP_TYPE_RESERVED 2 #define XEN_HVM_MEMMAP_TYPE_ACPI 3 #define XEN_HVM_MEMMAP_TYPE_NVS 4 #define XEN_HVM_MEMMAP_TYPE_UNUSABLE 5 #define XEN_HVM_MEMMAP_TYPE_DISABLED 6 #define XEN_HVM_MEMMAP_TYPE_PMEM 7 /* * C representation of the x86/HVM start info layout. * * The canonical definition of this layout is above; this is just a way to * represent the layout described there using C types. */ struct hvm_start_info { uint32_t magic; /* Contains the magic value 0x336ec578 */ /* ("xEn3" with the 0x80 bit of the "E" set).*/ uint32_t version; /* Version of this structure. */ uint32_t flags; /* SIF_xxx flags. */ uint32_t nr_modules; /* Number of modules passed to the kernel. */ uint64_t modlist_paddr; /* Physical address of an array of */ /* hvm_modlist_entry. */ uint64_t cmdline_paddr; /* Physical address of the command line. */ uint64_t rsdp_paddr; /* Physical address of the RSDP ACPI data */ /* structure. */ /* All following fields only present in version 1 and newer */ uint64_t memmap_paddr; /* Physical address of an array of */ /* hvm_memmap_table_entry. */ uint32_t memmap_entries; /* Number of entries in the memmap table. */ /* Value will be zero if there is no memory */ /* map being provided. */ uint32_t reserved; /* Must be zero. */ }; struct hvm_modlist_entry { uint64_t paddr; /* Physical address of the module. */ uint64_t size; /* Size of the module in bytes. */ uint64_t cmdline_paddr; /* Physical address of the command line. */ uint64_t reserved; }; struct hvm_memmap_table_entry { uint64_t addr; /* Base address of the memory region */ uint64_t size; /* Size of the memory region in bytes */ uint32_t type; /* Mapping type */ uint32_t reserved; /* Must be zero for Version 1.
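 *
 * A sketch of how early boot code might walk this table (illustrative
 * only; 'si' is the hvm_start_info address received in %ebx, already
 * converted to a usable pointer, and phys_to_virt() stands in for a
 * suitable phys-to-virt helper):
 *
 *   struct hvm_start_info *si = ...;
 *   struct hvm_memmap_table_entry *e = phys_to_virt(si->memmap_paddr);
 *   uint32_t i;
 *   for (i = 0; si->version >= 1 && i < si->memmap_entries; i++)
 *       if (e[i].type == XEN_HVM_MEMMAP_TYPE_RAM)
 *           (account RAM from e[i].addr to e[i].addr + e[i].size)
 *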
*/ }; #endif /* __XEN_PUBLIC_ARCH_X86_HVM_START_INFO_H__ */ interface/hvm/dm_op.h 0000644 00000002504 14722073410 0010546 0 ustar 00 /* * Copyright (c) 2016, Citrix Systems Inc * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. */ #ifndef __XEN_PUBLIC_HVM_DM_OP_H__ #define __XEN_PUBLIC_HVM_DM_OP_H__ struct xen_dm_op_buf { GUEST_HANDLE(void) h; xen_ulong_t size; }; DEFINE_GUEST_HANDLE_STRUCT(xen_dm_op_buf); #endif /* __XEN_PUBLIC_HVM_DM_OP_H__ */ interface/platform.h 0000644 00000040056 14722073410 0010506 0 ustar 00 /****************************************************************************** * platform.h * * Hardware platform operations. Intended for use by domain-0 kernel. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to * deal in the Software without restriction, including without limitation the * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or * sell copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. * * Copyright (c) 2002-2006, K Fraser */ #ifndef __XEN_PUBLIC_PLATFORM_H__ #define __XEN_PUBLIC_PLATFORM_H__ #include <xen/interface/xen.h> #define XENPF_INTERFACE_VERSION 0x03000001 /* * Set clock such that it would read <secs,nsecs> after 00:00:00 UTC, * 1 January, 1970 if the current system time was <system_time>. */ #define XENPF_settime32 17 struct xenpf_settime32 { /* IN variables. */ uint32_t secs; uint32_t nsecs; uint64_t system_time; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_settime32_t); #define XENPF_settime64 62 struct xenpf_settime64 { /* IN variables. */ uint64_t secs; uint32_t nsecs; uint32_t mbz; uint64_t system_time; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_settime64_t); /* * Request memory range (@mfn, @mfn+@nr_mfns-1) to have type @type. 
* On x86, @type is an architecture-defined MTRR memory type. * On success, returns the MTRR that was used (@reg) and a handle that can * be passed to XENPF_DEL_MEMTYPE to accurately tear down the new setting. * (x86-specific). */ #define XENPF_add_memtype 31 struct xenpf_add_memtype { /* IN variables. */ xen_pfn_t mfn; uint64_t nr_mfns; uint32_t type; /* OUT variables. */ uint32_t handle; uint32_t reg; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_add_memtype_t); /* * Tear down an existing memory-range type. If @handle is remembered then it * should be passed in to accurately tear down the correct setting (in case * of overlapping memory regions with differing types). If it is not known * then @handle should be set to zero. In all cases @reg must be set. * (x86-specific). */ #define XENPF_del_memtype 32 struct xenpf_del_memtype { /* IN variables. */ uint32_t handle; uint32_t reg; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_del_memtype_t); /* Read current type of an MTRR (x86-specific). */ #define XENPF_read_memtype 33 struct xenpf_read_memtype { /* IN variables. */ uint32_t reg; /* OUT variables. */ xen_pfn_t mfn; uint64_t nr_mfns; uint32_t type; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_read_memtype_t); #define XENPF_microcode_update 35 struct xenpf_microcode_update { /* IN variables. */ GUEST_HANDLE(void) data; /* Pointer to microcode data */ uint32_t length; /* Length of microcode data. */ }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_microcode_update_t); #define XENPF_platform_quirk 39 #define QUIRK_NOIRQBALANCING 1 /* Do not restrict IO-APIC RTE targets */ #define QUIRK_IOAPIC_BAD_REGSEL 2 /* IO-APIC REGSEL forgets its value */ #define QUIRK_IOAPIC_GOOD_REGSEL 3 /* IO-APIC REGSEL behaves properly */ struct xenpf_platform_quirk { /* IN variables. */ uint32_t quirk_id; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_platform_quirk_t); #define XENPF_efi_runtime_call 49 #define XEN_EFI_get_time 1 #define XEN_EFI_set_time 2 #define XEN_EFI_get_wakeup_time 3 #define XEN_EFI_set_wakeup_time 4 #define XEN_EFI_get_next_high_monotonic_count 5 #define XEN_EFI_get_variable 6 #define XEN_EFI_set_variable 7 #define XEN_EFI_get_next_variable_name 8 #define XEN_EFI_query_variable_info 9 #define XEN_EFI_query_capsule_capabilities 10 #define XEN_EFI_update_capsule 11 struct xenpf_efi_runtime_call { uint32_t function; /* * This field is generally used for per sub-function flags (defined * below), except for the XEN_EFI_get_next_high_monotonic_count case, * where it holds the single returned value. 
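 *
 * For example (a hedged sketch; HYPERVISOR_platform_op() is the Linux
 * wrapper, and struct xen_platform_op is defined at the end of this
 * file):
 *
 *   struct xen_platform_op op = {
 *       .cmd = XENPF_efi_runtime_call,
 *       .interface_version = XENPF_INTERFACE_VERSION,
 *   };
 *   op.u.efi_runtime_call.function = XEN_EFI_get_time;
 *   if (HYPERVISOR_platform_op(&op) == 0)
 *       (op.u.efi_runtime_call.u.get_time.time holds the result)
 *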
*/ uint32_t misc; xen_ulong_t status; union { #define XEN_EFI_GET_TIME_SET_CLEARS_NS 0x00000001 struct { struct xenpf_efi_time { uint16_t year; uint8_t month; uint8_t day; uint8_t hour; uint8_t min; uint8_t sec; uint32_t ns; int16_t tz; uint8_t daylight; } time; uint32_t resolution; uint32_t accuracy; } get_time; struct xenpf_efi_time set_time; #define XEN_EFI_GET_WAKEUP_TIME_ENABLED 0x00000001 #define XEN_EFI_GET_WAKEUP_TIME_PENDING 0x00000002 struct xenpf_efi_time get_wakeup_time; #define XEN_EFI_SET_WAKEUP_TIME_ENABLE 0x00000001 #define XEN_EFI_SET_WAKEUP_TIME_ENABLE_ONLY 0x00000002 struct xenpf_efi_time set_wakeup_time; #define XEN_EFI_VARIABLE_NON_VOLATILE 0x00000001 #define XEN_EFI_VARIABLE_BOOTSERVICE_ACCESS 0x00000002 #define XEN_EFI_VARIABLE_RUNTIME_ACCESS 0x00000004 struct { GUEST_HANDLE(void) name; /* UCS-2/UTF-16 string */ xen_ulong_t size; GUEST_HANDLE(void) data; struct xenpf_efi_guid { uint32_t data1; uint16_t data2; uint16_t data3; uint8_t data4[8]; } vendor_guid; } get_variable, set_variable; struct { xen_ulong_t size; GUEST_HANDLE(void) name; /* UCS-2/UTF-16 string */ struct xenpf_efi_guid vendor_guid; } get_next_variable_name; struct { uint32_t attr; uint64_t max_store_size; uint64_t remain_store_size; uint64_t max_size; } query_variable_info; struct { GUEST_HANDLE(void) capsule_header_array; xen_ulong_t capsule_count; uint64_t max_capsule_size; uint32_t reset_type; } query_capsule_capabilities; struct { GUEST_HANDLE(void) capsule_header_array; xen_ulong_t capsule_count; uint64_t sg_list; /* machine address */ } update_capsule; } u; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_efi_runtime_call); #define XEN_FW_EFI_VERSION 0 #define XEN_FW_EFI_CONFIG_TABLE 1 #define XEN_FW_EFI_VENDOR 2 #define XEN_FW_EFI_MEM_INFO 3 #define XEN_FW_EFI_RT_VERSION 4 #define XENPF_firmware_info 50 #define XEN_FW_DISK_INFO 1 /* from int 13 AH=08/41/48 */ #define XEN_FW_DISK_MBR_SIGNATURE 2 /* from MBR offset 0x1b8 */ #define XEN_FW_VBEDDC_INFO 3 /* from int 10 AX=4f15 */ #define XEN_FW_EFI_INFO 4 /* from EFI */ #define XEN_FW_KBD_SHIFT_FLAGS 5 /* Int16, Fn02: Get keyboard shift flags. */ struct xenpf_firmware_info { /* IN variables. */ uint32_t type; uint32_t index; /* OUT variables. */ union { struct { /* Int13, Fn48: Check Extensions Present. */ uint8_t device; /* %dl: bios device number */ uint8_t version; /* %ah: major version */ uint16_t interface_support; /* %cx: support bitmap */ /* Int13, Fn08: Legacy Get Device Parameters. */ uint16_t legacy_max_cylinder; /* %cl[7:6]:%ch: max cyl # */ uint8_t legacy_max_head; /* %dh: max head # */ uint8_t legacy_sectors_per_track; /* %cl[5:0]: max sector # */ /* Int13, Fn41: Get Device Parameters (as filled into %ds:%esi). */ /* NB. First uint16_t of buffer must be set to buffer size. */ GUEST_HANDLE(void) edd_params; } disk_info; /* XEN_FW_DISK_INFO */ struct { uint8_t device; /* bios device number */ uint32_t mbr_signature; /* offset 0x1b8 in mbr */ } disk_mbr_signature; /* XEN_FW_DISK_MBR_SIGNATURE */ struct { /* Int10, AX=4F15: Get EDID info. 
*/ uint8_t capabilities; uint8_t edid_transfer_time; /* must refer to 128-byte buffer */ GUEST_HANDLE(uchar) edid; } vbeddc_info; /* XEN_FW_VBEDDC_INFO */ union xenpf_efi_info { uint32_t version; struct { uint64_t addr; /* EFI_CONFIGURATION_TABLE */ uint32_t nent; } cfg; struct { uint32_t revision; uint32_t bufsz; /* input, in bytes */ GUEST_HANDLE(void) name; /* UCS-2/UTF-16 string */ } vendor; struct { uint64_t addr; uint64_t size; uint64_t attr; uint32_t type; } mem; } efi_info; /* XEN_FW_EFI_INFO */ uint8_t kbd_shift_flags; /* XEN_FW_KBD_SHIFT_FLAGS */ } u; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_firmware_info_t); #define XENPF_enter_acpi_sleep 51 struct xenpf_enter_acpi_sleep { /* IN variables */ uint16_t val_a; /* PM1a control / sleep type A. */ uint16_t val_b; /* PM1b control / sleep type B. */ uint32_t sleep_state; /* Which state to enter (Sn). */ #define XENPF_ACPI_SLEEP_EXTENDED 0x00000001 uint32_t flags; /* XENPF_ACPI_SLEEP_*. */ }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_enter_acpi_sleep_t); #define XENPF_change_freq 52 struct xenpf_change_freq { /* IN variables */ uint32_t flags; /* Must be zero. */ uint32_t cpu; /* Physical cpu. */ uint64_t freq; /* New frequency (Hz). */ }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_change_freq_t); /* * Get idle times (nanoseconds since boot) for physical CPUs specified in the * @cpumap_bitmap with range [0..@cpumap_nr_cpus-1]. The @idletime array is * indexed by CPU number; only entries with the corresponding @cpumap_bitmap * bit set are written to. On return, @cpumap_bitmap is modified so that any * non-existent CPUs are cleared. Such CPUs have their @idletime array entry * cleared. */ #define XENPF_getidletime 53 struct xenpf_getidletime { /* IN/OUT variables */ /* IN: CPUs to interrogate; OUT: subset of IN which are present */ GUEST_HANDLE(uchar) cpumap_bitmap; /* IN variables */ /* Size of cpumap bitmap. */ uint32_t cpumap_nr_cpus; /* Must be indexable for every cpu in cpumap_bitmap. */ GUEST_HANDLE(uint64_t) idletime; /* OUT variables */ /* System time when the idletime snapshots were taken. */ uint64_t now; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_getidletime_t); #define XENPF_set_processor_pminfo 54 /* ability bits */ #define XEN_PROCESSOR_PM_CX 1 #define XEN_PROCESSOR_PM_PX 2 #define XEN_PROCESSOR_PM_TX 4 /* cmd type */ #define XEN_PM_CX 0 #define XEN_PM_PX 1 #define XEN_PM_TX 2 #define XEN_PM_PDC 3 /* Px sub info type */ #define XEN_PX_PCT 1 #define XEN_PX_PSS 2 #define XEN_PX_PPC 4 #define XEN_PX_PSD 8 struct xen_power_register { uint32_t space_id; uint32_t bit_width; uint32_t bit_offset; uint32_t access_size; uint64_t address; }; struct xen_processor_csd { uint32_t domain; /* domain number of one dependent group */ uint32_t coord_type; /* coordination type */ uint32_t num; /* number of processors in same domain */ }; DEFINE_GUEST_HANDLE_STRUCT(xen_processor_csd); struct xen_processor_cx { struct xen_power_register reg; /* GAS for Cx trigger register */ uint8_t type; /* cstate value, c0: 0, c1: 1, ... 
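 *
 * (Editorial sketch for the XENPF_getidletime interface documented above;
 * illustrative only, with hypothetical 'bitmap' and 'idle' buffers sized
 * for nr_cpus:
 *
 *   struct xen_platform_op op = {
 *       .cmd = XENPF_getidletime,
 *       .interface_version = XENPF_INTERFACE_VERSION,
 *   };
 *   set_xen_guest_handle(op.u.getidletime.cpumap_bitmap, bitmap);
 *   op.u.getidletime.cpumap_nr_cpus = nr_cpus;
 *   set_xen_guest_handle(op.u.getidletime.idletime, idle);
 *   HYPERVISOR_platform_op(&op);
 * end of editorial sketch.)
 *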
*/ uint32_t latency; /* worst latency (ms) to enter/exit this cstate */ uint32_t power; /* average power consumption (mW) */ uint32_t dpcnt; /* number of dependency entries */ GUEST_HANDLE(xen_processor_csd) dp; /* NULL if no dependency */ }; DEFINE_GUEST_HANDLE_STRUCT(xen_processor_cx); struct xen_processor_flags { uint32_t bm_control:1; uint32_t bm_check:1; uint32_t has_cst:1; uint32_t power_setup_done:1; uint32_t bm_rld_set:1; }; struct xen_processor_power { uint32_t count; /* number of C state entries in array below */ struct xen_processor_flags flags; /* global flags of this processor */ GUEST_HANDLE(xen_processor_cx) states; /* supported c states */ }; struct xen_pct_register { uint8_t descriptor; uint16_t length; uint8_t space_id; uint8_t bit_width; uint8_t bit_offset; uint8_t reserved; uint64_t address; }; struct xen_processor_px { uint64_t core_frequency; /* megahertz */ uint64_t power; /* milliWatts */ uint64_t transition_latency; /* microseconds */ uint64_t bus_master_latency; /* microseconds */ uint64_t control; /* control value */ uint64_t status; /* success indicator */ }; DEFINE_GUEST_HANDLE_STRUCT(xen_processor_px); struct xen_psd_package { uint64_t num_entries; uint64_t revision; uint64_t domain; uint64_t coord_type; uint64_t num_processors; }; struct xen_processor_performance { uint32_t flags; /* flag for Px sub info type */ uint32_t platform_limit; /* Platform limitation on freq usage */ struct xen_pct_register control_register; struct xen_pct_register status_register; uint32_t state_count; /* total available performance states */ GUEST_HANDLE(xen_processor_px) states; struct xen_psd_package domain_info; uint32_t shared_type; /* coordination type of this processor */ }; DEFINE_GUEST_HANDLE_STRUCT(xen_processor_performance); struct xenpf_set_processor_pminfo { /* IN variables */ uint32_t id; /* ACPI CPU ID */ uint32_t type; /* {XEN_PM_CX, XEN_PM_PX} */ union { struct xen_processor_power power; /* Cx: _CST/_CSD */ struct xen_processor_performance perf; /* Px: _PPC/_PCT/_PSS/_PSD */ GUEST_HANDLE(uint32_t) pdc; }; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_set_processor_pminfo); #define XENPF_get_cpuinfo 55 struct xenpf_pcpuinfo { /* IN */ uint32_t xen_cpuid; /* OUT */ /* The maximum cpu_id that is present */ uint32_t max_present; #define XEN_PCPU_FLAGS_ONLINE 1 /* Corresponding xen_cpuid is not present */ #define XEN_PCPU_FLAGS_INVALID 2 uint32_t flags; uint32_t apic_id; uint32_t acpi_id; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_pcpuinfo); #define XENPF_cpu_online 56 #define XENPF_cpu_offline 57 struct xenpf_cpu_ol { uint32_t cpuid; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_cpu_ol); #define XENPF_cpu_hotadd 58 struct xenpf_cpu_hotadd { uint32_t apic_id; uint32_t acpi_id; uint32_t pxm; }; #define XENPF_mem_hotadd 59 struct xenpf_mem_hotadd { uint64_t spfn; uint64_t epfn; uint32_t pxm; uint32_t flags; }; #define XENPF_core_parking 60 struct xenpf_core_parking { /* IN variables */ #define XEN_CORE_PARKING_SET 1 #define XEN_CORE_PARKING_GET 2 uint32_t type; /* IN variables: number of cpus expected to be idled */ /* OUT variables: number of cpus actually idled */ uint32_t idle_nums; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_core_parking); #define XENPF_get_symbol 63 struct xenpf_symdata { /* IN/OUT variables */ uint32_t namelen; /* size of 'name' buffer */ /* IN/OUT variables */ uint32_t symnum; /* IN: Symbol to read */ /* OUT: Next available symbol.
If same as IN */ /* then we reached the end */ /* OUT variables */ GUEST_HANDLE(char) name; uint64_t address; char type; }; DEFINE_GUEST_HANDLE_STRUCT(xenpf_symdata); struct xen_platform_op { uint32_t cmd; uint32_t interface_version; /* XENPF_INTERFACE_VERSION */ union { struct xenpf_settime32 settime32; struct xenpf_settime64 settime64; struct xenpf_add_memtype add_memtype; struct xenpf_del_memtype del_memtype; struct xenpf_read_memtype read_memtype; struct xenpf_microcode_update microcode; struct xenpf_platform_quirk platform_quirk; struct xenpf_efi_runtime_call efi_runtime_call; struct xenpf_firmware_info firmware_info; struct xenpf_enter_acpi_sleep enter_acpi_sleep; struct xenpf_change_freq change_freq; struct xenpf_getidletime getidletime; struct xenpf_set_processor_pminfo set_pminfo; struct xenpf_pcpuinfo pcpu_info; struct xenpf_cpu_ol cpu_ol; struct xenpf_cpu_hotadd cpu_add; struct xenpf_mem_hotadd mem_add; struct xenpf_core_parking core_parking; struct xenpf_symdata symdata; uint8_t pad[128]; } u; }; DEFINE_GUEST_HANDLE_STRUCT(xen_platform_op_t); #endif /* __XEN_PUBLIC_PLATFORM_H__ */ page.h 0000644 00000002541 14722073410 0005633 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef _XEN_PAGE_H #define _XEN_PAGE_H #include <asm/page.h> /* The hypercall interface supports only 4KB page */ #define XEN_PAGE_SHIFT 12 #define XEN_PAGE_SIZE (_AC(1, UL) << XEN_PAGE_SHIFT) #define XEN_PAGE_MASK (~(XEN_PAGE_SIZE-1)) #define xen_offset_in_page(p) ((unsigned long)(p) & ~XEN_PAGE_MASK) /* * We assume that PAGE_SIZE is a multiple of XEN_PAGE_SIZE * XXX: Add a BUILD_BUG_ON? */ #define xen_pfn_to_page(xen_pfn) \ (pfn_to_page((unsigned long)(xen_pfn) >> (PAGE_SHIFT - XEN_PAGE_SHIFT))) #define page_to_xen_pfn(page) \ ((page_to_pfn(page)) << (PAGE_SHIFT - XEN_PAGE_SHIFT)) #define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE) #define XEN_PFN_DOWN(x) ((x) >> XEN_PAGE_SHIFT) #define XEN_PFN_UP(x) (((x) + XEN_PAGE_SIZE-1) >> XEN_PAGE_SHIFT) #define XEN_PFN_PHYS(x) ((phys_addr_t)(x) << XEN_PAGE_SHIFT) #include <asm/xen/page.h> /* Return the GFN associated to the first 4KB of the page */ static inline unsigned long xen_page_to_gfn(struct page *page) { return pfn_to_gfn(page_to_xen_pfn(page)); } struct xen_memory_region { unsigned long start_pfn; unsigned long n_pfns; }; #define XEN_EXTRA_MEM_MAX_REGIONS 128 /* == E820_MAX_ENTRIES_ZEROPAGE */ extern __initdata struct xen_memory_region xen_extra_mem[XEN_EXTRA_MEM_MAX_REGIONS]; extern unsigned long xen_released_pages; #endif /* _XEN_PAGE_H */ hvm.h 0000644 00000002451 14722073410 0005511 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /* Simple wrappers around HVM functions */ #ifndef XEN_HVM_H__ #define XEN_HVM_H__ #include <xen/interface/hvm/params.h> #include <asm/xen/hypercall.h> static const char *param_name(int op) { #define PARAM(x) [HVM_PARAM_##x] = #x static const char *const names[] = { PARAM(CALLBACK_IRQ), PARAM(STORE_PFN), PARAM(STORE_EVTCHN), PARAM(PAE_ENABLED), PARAM(IOREQ_PFN), PARAM(BUFIOREQ_PFN), PARAM(TIMER_MODE), PARAM(HPET_ENABLED), PARAM(IDENT_PT), PARAM(DM_DOMAIN), PARAM(ACPI_S_STATE), PARAM(VM86_TSS), PARAM(VPT_ALIGN), PARAM(CONSOLE_PFN), PARAM(CONSOLE_EVTCHN), }; #undef PARAM if (op >= ARRAY_SIZE(names)) return "unknown"; if (!names[op]) return "reserved"; return names[op]; } static inline int hvm_get_parameter(int idx, uint64_t *value) { struct xen_hvm_param xhv; int r; xhv.domid = DOMID_SELF; xhv.index = idx; r = HYPERVISOR_hvm_op(HVMOP_get_param, &xhv); if (r < 0) { pr_err("Cannot get hvm parameter %s (%d): %d!\n", 
param_name(idx), idx, r); return r; } *value = xhv.value; return r; } #define HVM_CALLBACK_VIA_TYPE_VECTOR 0x2 #define HVM_CALLBACK_VIA_TYPE_SHIFT 56 #define HVM_CALLBACK_VECTOR(x) (((uint64_t)HVM_CALLBACK_VIA_TYPE_VECTOR)<<\ HVM_CALLBACK_VIA_TYPE_SHIFT | (x)) #endif /* XEN_HVM_H__ */ xenbus_dev.h 0000644 00000003466 14722073410 0007070 0 ustar 00 /****************************************************************************** * evtchn.h * * Interface to /dev/xen/xenbus_backend. * * Copyright (c) 2011 Bastian Blank <waldi@debian.org> * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License version 2 * as published by the Free Software Foundation; or, when distributed * separately from the Linux kernel or incorporated into other * software packages, subject to the following license: * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this source file (the "Software"), to deal in the Software without * restriction, including without limitation the rights to use, copy, modify, * merge, publish, distribute, sublicense, and/or sell copies of the Software, * and to permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef __LINUX_XEN_XENBUS_DEV_H__ #define __LINUX_XEN_XENBUS_DEV_H__ #include <linux/ioctl.h> #define IOCTL_XENBUS_BACKEND_EVTCHN \ _IOC(_IOC_NONE, 'B', 0, 0) #define IOCTL_XENBUS_BACKEND_SETUP \ _IOC(_IOC_NONE, 'B', 1, 0) #endif /* __LINUX_XEN_XENBUS_DEV_H__ */ mem-reservation.h 0000644 00000002763 14722073410 0010042 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /* * Xen memory reservation utilities. * * Copyright (c) 2003, B Dragovic * Copyright (c) 2003-2004, M Williamson, K Fraser * Copyright (c) 2005 Dan M. Smith, IBM Corporation * Copyright (c) 2010 Daniel Kiper * Copyright (c) 2018 Oleksandr Andrushchenko, EPAM Systems Inc. 
*/ #ifndef _XENMEM_RESERVATION_H #define _XENMEM_RESERVATION_H #include <linux/highmem.h> #include <xen/page.h> extern bool xen_scrub_pages; static inline void xenmem_reservation_scrub_page(struct page *page) { if (xen_scrub_pages) clear_highpage(page); } #ifdef CONFIG_XEN_HAVE_PVMMU void __xenmem_reservation_va_mapping_update(unsigned long count, struct page **pages, xen_pfn_t *frames); void __xenmem_reservation_va_mapping_reset(unsigned long count, struct page **pages); #endif static inline void xenmem_reservation_va_mapping_update(unsigned long count, struct page **pages, xen_pfn_t *frames) { #ifdef CONFIG_XEN_HAVE_PVMMU if (!xen_feature(XENFEAT_auto_translated_physmap)) __xenmem_reservation_va_mapping_update(count, pages, frames); #endif } static inline void xenmem_reservation_va_mapping_reset(unsigned long count, struct page **pages) { #ifdef CONFIG_XEN_HAVE_PVMMU if (!xen_feature(XENFEAT_auto_translated_physmap)) __xenmem_reservation_va_mapping_reset(count, pages); #endif } int xenmem_reservation_increase(int count, xen_pfn_t *frames); int xenmem_reservation_decrease(int count, xen_pfn_t *frames); #endif arm/page.h 0000644 00000005550 14722073410 0006415 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef _ASM_ARM_XEN_PAGE_H #define _ASM_ARM_XEN_PAGE_H #include <asm/page.h> #include <asm/pgtable.h> #include <linux/pfn.h> #include <linux/types.h> #include <linux/dma-mapping.h> #include <xen/xen.h> #include <xen/interface/grant_table.h> #define phys_to_machine_mapping_valid(pfn) (1) /* Xen machine address */ typedef struct xmaddr { phys_addr_t maddr; } xmaddr_t; /* Xen pseudo-physical address */ typedef struct xpaddr { phys_addr_t paddr; } xpaddr_t; #define XMADDR(x) ((xmaddr_t) { .maddr = (x) }) #define XPADDR(x) ((xpaddr_t) { .paddr = (x) }) #define INVALID_P2M_ENTRY (~0UL) /* * The pseudo-physical frame (pfn) used in all the helpers is always based * on Xen page granularity (i.e 4KB). * * A Linux page may be split across multiple non-contiguous Xen page so we * have to keep track with frame based on 4KB page granularity. * * PV drivers should never make a direct usage of those helpers (particularly * pfn_to_gfn and gfn_to_pfn). */ unsigned long __pfn_to_mfn(unsigned long pfn); extern struct rb_root phys_to_mach; /* Pseudo-physical <-> Guest conversion */ static inline unsigned long pfn_to_gfn(unsigned long pfn) { return pfn; } static inline unsigned long gfn_to_pfn(unsigned long gfn) { return gfn; } /* Pseudo-physical <-> BUS conversion */ static inline unsigned long pfn_to_bfn(unsigned long pfn) { unsigned long mfn; if (phys_to_mach.rb_node != NULL) { mfn = __pfn_to_mfn(pfn); if (mfn != INVALID_P2M_ENTRY) return mfn; } return pfn; } static inline unsigned long bfn_to_pfn(unsigned long bfn) { return bfn; } #define bfn_to_local_pfn(bfn) bfn_to_pfn(bfn) /* VIRT <-> GUEST conversion */ #define virt_to_gfn(v) (pfn_to_gfn(virt_to_phys(v) >> XEN_PAGE_SHIFT)) #define gfn_to_virt(m) (__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT)) /* Only used in PV code. But ARM guests are always HVM. 
*/ static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr) { BUG(); } extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops, struct gnttab_map_grant_ref *kmap_ops, struct page **pages, unsigned int count); extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops, struct gnttab_unmap_grant_ref *kunmap_ops, struct page **pages, unsigned int count); bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn); bool __set_phys_to_machine_multi(unsigned long pfn, unsigned long mfn, unsigned long nr_pages); static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn) { return __set_phys_to_machine(pfn, mfn); } #define xen_remap(cookie, size) ioremap_cache((cookie), (size)) #define xen_unmap(cookie) iounmap((cookie)) bool xen_arch_need_swiotlb(struct device *dev, phys_addr_t phys, dma_addr_t dev_addr); unsigned long xen_get_swiotlb_free_pages(unsigned int order); #endif /* _ASM_ARM_XEN_PAGE_H */ arm/hypervisor.h 0000644 00000001406 14722073410 0007707 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef _ASM_ARM_XEN_HYPERVISOR_H #define _ASM_ARM_XEN_HYPERVISOR_H #include <linux/init.h> extern struct shared_info *HYPERVISOR_shared_info; extern struct start_info *xen_start_info; /* Lazy mode for batching updates / context switch */ enum paravirt_lazy_mode { PARAVIRT_LAZY_NONE, PARAVIRT_LAZY_MMU, PARAVIRT_LAZY_CPU, }; static inline enum paravirt_lazy_mode paravirt_get_lazy_mode(void) { return PARAVIRT_LAZY_NONE; } #ifdef CONFIG_XEN void __init xen_early_init(void); #else static inline void xen_early_init(void) { return; } #endif #ifdef CONFIG_HOTPLUG_CPU static inline void xen_arch_register_cpu(int num) { } static inline void xen_arch_unregister_cpu(int num) { } #endif #endif /* _ASM_ARM_XEN_HYPERVISOR_H */ arm/hypercall.h 0000644 00000006606 14722073410 0007467 0 ustar 00 /****************************************************************************** * hypercall.h * * Linux-specific hypervisor handling. * * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012 * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License version 2 * as published by the Free Software Foundation; or, when distributed * separately from the Linux kernel or incorporated into other * software packages, subject to the following license: * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this source file (the "Software"), to deal in the Software without * restriction, including without limitation the rights to use, copy, modify, * merge, publish, distribute, sublicense, and/or sell copies of the Software, * and to permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #ifndef _ASM_ARM_XEN_HYPERCALL_H #define _ASM_ARM_XEN_HYPERCALL_H #include <linux/bug.h> #include <xen/interface/xen.h> #include <xen/interface/sched.h> #include <xen/interface/platform.h> struct xen_dm_op_buf; long privcmd_call(unsigned call, unsigned long a1, unsigned long a2, unsigned long a3, unsigned long a4, unsigned long a5); int HYPERVISOR_xen_version(int cmd, void *arg); int HYPERVISOR_console_io(int cmd, int count, char *str); int HYPERVISOR_grant_table_op(unsigned int cmd, void *uop, unsigned int count); int HYPERVISOR_sched_op(int cmd, void *arg); int HYPERVISOR_event_channel_op(int cmd, void *arg); unsigned long HYPERVISOR_hvm_op(int op, void *arg); int HYPERVISOR_memory_op(unsigned int cmd, void *arg); int HYPERVISOR_physdev_op(int cmd, void *arg); int HYPERVISOR_vcpu_op(int cmd, int vcpuid, void *extra_args); int HYPERVISOR_tmem_op(void *arg); int HYPERVISOR_vm_assist(unsigned int cmd, unsigned int type); int HYPERVISOR_dm_op(domid_t domid, unsigned int nr_bufs, struct xen_dm_op_buf *bufs); int HYPERVISOR_platform_op_raw(void *arg); static inline int HYPERVISOR_platform_op(struct xen_platform_op *op) { op->interface_version = XENPF_INTERFACE_VERSION; return HYPERVISOR_platform_op_raw(op); } int HYPERVISOR_multicall(struct multicall_entry *calls, uint32_t nr); static inline int HYPERVISOR_suspend(unsigned long start_info_mfn) { struct sched_shutdown r = { .reason = SHUTDOWN_suspend }; /* start_info_mfn is unused on ARM */ return HYPERVISOR_sched_op(SCHEDOP_shutdown, &r); } static inline void MULTI_update_va_mapping(struct multicall_entry *mcl, unsigned long va, unsigned int new_val, unsigned long flags) { BUG(); } static inline void MULTI_mmu_update(struct multicall_entry *mcl, struct mmu_update *req, int count, int *success_count, domid_t domid) { BUG(); } #endif /* _ASM_ARM_XEN_HYPERCALL_H */ arm/page-coherent.h 0000644 00000001145 14722073410 0010216 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef _XEN_ARM_PAGE_COHERENT_H #define _XEN_ARM_PAGE_COHERENT_H #include <linux/dma-mapping.h> #include <asm/page.h> static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size, dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs) { return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs); } static inline void xen_free_coherent_pages(struct device *hwdev, size_t size, void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs) { dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs); } #endif /* _XEN_ARM_PAGE_COHERENT_H */ arm/interface.h 0000644 00000005041 14722073410 0007434 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /****************************************************************************** * Guest OS interface to ARM Xen. 
* * Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Citrix, 2012 */ #ifndef _ASM_ARM_XEN_INTERFACE_H #define _ASM_ARM_XEN_INTERFACE_H #include <linux/types.h> #define uint64_aligned_t uint64_t __attribute__((aligned(8))) #define __DEFINE_GUEST_HANDLE(name, type) \ typedef struct { union { type *p; uint64_aligned_t q; }; } \ __guest_handle_ ## name #define DEFINE_GUEST_HANDLE_STRUCT(name) \ __DEFINE_GUEST_HANDLE(name, struct name) #define DEFINE_GUEST_HANDLE(name) __DEFINE_GUEST_HANDLE(name, name) #define GUEST_HANDLE(name) __guest_handle_ ## name #define set_xen_guest_handle(hnd, val) \ do { \ if (sizeof(hnd) == 8) \ *(uint64_t *)&(hnd) = 0; \ (hnd).p = val; \ } while (0) #define __HYPERVISOR_platform_op_raw __HYPERVISOR_platform_op #ifndef __ASSEMBLY__ /* Explicitly size integers that represent pfns in the interface with * Xen so that we can have one ABI that works for 32 and 64 bit guests. * Note that this means that the xen_pfn_t type may be capable of * representing pfn's which the guest cannot represent in its own pfn * type. However since pfn space is controlled by the guest this is * fine since it simply wouldn't be able to create any sure pfns in * the first place. */ typedef uint64_t xen_pfn_t; #define PRI_xen_pfn "llx" typedef uint64_t xen_ulong_t; #define PRI_xen_ulong "llx" typedef int64_t xen_long_t; #define PRI_xen_long "llx" /* Guest handles for primitive C types. */ __DEFINE_GUEST_HANDLE(uchar, unsigned char); __DEFINE_GUEST_HANDLE(uint, unsigned int); DEFINE_GUEST_HANDLE(char); DEFINE_GUEST_HANDLE(int); DEFINE_GUEST_HANDLE(void); DEFINE_GUEST_HANDLE(uint64_t); DEFINE_GUEST_HANDLE(uint32_t); DEFINE_GUEST_HANDLE(xen_pfn_t); DEFINE_GUEST_HANDLE(xen_ulong_t); /* Maximum number of virtual CPUs in multi-processor guests. */ #define MAX_VIRT_CPUS 1 struct arch_vcpu_info { }; struct arch_shared_info { }; /* TODO: Move pvclock definitions some place arch independent */ struct pvclock_vcpu_time_info { u32 version; u32 pad0; u64 tsc_timestamp; u64 system_time; u32 tsc_to_system_mul; s8 tsc_shift; u8 flags; u8 pad[2]; } __attribute__((__packed__)); /* 32 bytes */ /* It is OK to have a 12 bytes struct with no padding because it is packed */ struct pvclock_wall_clock { u32 version; u32 sec; u32 nsec; u32 sec_hi; } __attribute__((__packed__)); #endif #endif /* _ASM_ARM_XEN_INTERFACE_H */ xen-ops.h 0000644 00000015774 14722073410 0006324 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef INCLUDE_XEN_OPS_H #define INCLUDE_XEN_OPS_H #include <linux/percpu.h> #include <linux/notifier.h> #include <linux/efi.h> #include <xen/features.h> #include <asm/xen/interface.h> #include <xen/interface/vcpu.h> DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu); DECLARE_PER_CPU(uint32_t, xen_vcpu_id); static inline uint32_t xen_vcpu_nr(int cpu) { return per_cpu(xen_vcpu_id, cpu); } #define XEN_VCPU_ID_INVALID U32_MAX void xen_arch_pre_suspend(void); void xen_arch_post_suspend(int suspend_cancelled); void xen_timer_resume(void); void xen_arch_resume(void); void xen_arch_suspend(void); void xen_reboot(int reason); void xen_resume_notifier_register(struct notifier_block *nb); void xen_resume_notifier_unregister(struct notifier_block *nb); bool xen_vcpu_stolen(int vcpu); void xen_setup_runstate_info(int cpu); void xen_time_setup_guest(void); void xen_manage_runstate_time(int action); void xen_get_runstate_snapshot(struct vcpu_runstate_info *res); u64 xen_steal_clock(int cpu); int xen_setup_shutdown_event(void); extern unsigned long *xen_contiguous_bitmap; #if defined(CONFIG_XEN_PV) || 
defined(CONFIG_ARM) || defined(CONFIG_ARM64) int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order, unsigned int address_bits, dma_addr_t *dma_handle); void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order); #else static inline int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order, unsigned int address_bits, dma_addr_t *dma_handle) { return 0; } static inline void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order) { } #endif #if defined(CONFIG_XEN_PV) int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr, xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot, unsigned int domid, bool no_translate, struct page **pages); #else static inline int xen_remap_pfn(struct vm_area_struct *vma, unsigned long addr, xen_pfn_t *pfn, int nr, int *err_ptr, pgprot_t prot, unsigned int domid, bool no_translate, struct page **pages) { BUG(); return 0; } #endif struct vm_area_struct; #ifdef CONFIG_XEN_AUTO_XLATE int xen_xlate_remap_gfn_array(struct vm_area_struct *vma, unsigned long addr, xen_pfn_t *gfn, int nr, int *err_ptr, pgprot_t prot, unsigned int domid, struct page **pages); int xen_xlate_unmap_gfn_range(struct vm_area_struct *vma, int nr, struct page **pages); #else /* * These two functions are called from arch/x86/xen/mmu.c and so stubs * are needed for a configuration not specifying CONFIG_XEN_AUTO_XLATE. */ static inline int xen_xlate_remap_gfn_array(struct vm_area_struct *vma, unsigned long addr, xen_pfn_t *gfn, int nr, int *err_ptr, pgprot_t prot, unsigned int domid, struct page **pages) { return -EOPNOTSUPP; } static inline int xen_xlate_unmap_gfn_range(struct vm_area_struct *vma, int nr, struct page **pages) { return -EOPNOTSUPP; } #endif int xen_remap_vma_range(struct vm_area_struct *vma, unsigned long addr, unsigned long len); /* * xen_remap_domain_gfn_array() - map an array of foreign frames by gfn * @vma: VMA to map the pages into * @addr: Address at which to map the pages * @gfn: Array of GFNs to map * @nr: Number entries in the GFN array * @err_ptr: Returns per-GFN error status. * @prot: page protection mask * @domid: Domain owning the pages * @pages: Array of pages if this domain has an auto-translated physmap * * @gfn and @err_ptr may point to the same buffer, the GFNs will be * overwritten by the error codes after they are mapped. * * Returns the number of successfully mapped frames, or a -ve error * code. */ static inline int xen_remap_domain_gfn_array(struct vm_area_struct *vma, unsigned long addr, xen_pfn_t *gfn, int nr, int *err_ptr, pgprot_t prot, unsigned int domid, struct page **pages) { if (xen_feature(XENFEAT_auto_translated_physmap)) return xen_xlate_remap_gfn_array(vma, addr, gfn, nr, err_ptr, prot, domid, pages); /* We BUG_ON because it's a programmer error to pass a NULL err_ptr, * and the consequences later is quite hard to detect what the actual * cause of "wrong memory was mapped in". */ BUG_ON(err_ptr == NULL); return xen_remap_pfn(vma, addr, gfn, nr, err_ptr, prot, domid, false, pages); } /* * xen_remap_domain_mfn_array() - map an array of foreign frames by mfn * @vma: VMA to map the pages into * @addr: Address at which to map the pages * @mfn: Array of MFNs to map * @nr: Number entries in the MFN array * @err_ptr: Returns per-MFN error status. 
* @prot: page protection mask * @domid: Domain owning the pages * @pages: Array of pages if this domain has an auto-translated physmap * * @mfn and @err_ptr may point to the same buffer, the MFNs will be * overwritten by the error codes after they are mapped. * * Returns the number of successfully mapped frames, or a -ve error * code. */ static inline int xen_remap_domain_mfn_array(struct vm_area_struct *vma, unsigned long addr, xen_pfn_t *mfn, int nr, int *err_ptr, pgprot_t prot, unsigned int domid, struct page **pages) { if (xen_feature(XENFEAT_auto_translated_physmap)) return -EOPNOTSUPP; return xen_remap_pfn(vma, addr, mfn, nr, err_ptr, prot, domid, true, pages); } /* xen_remap_domain_gfn_range() - map a range of foreign frames * @vma: VMA to map the pages into * @addr: Address at which to map the pages * @gfn: First GFN to map. * @nr: Number frames to map * @prot: page protection mask * @domid: Domain owning the pages * @pages: Array of pages if this domain has an auto-translated physmap * * Returns the number of successfully mapped frames, or a -ve error * code. */ static inline int xen_remap_domain_gfn_range(struct vm_area_struct *vma, unsigned long addr, xen_pfn_t gfn, int nr, pgprot_t prot, unsigned int domid, struct page **pages) { if (xen_feature(XENFEAT_auto_translated_physmap)) return -EOPNOTSUPP; return xen_remap_pfn(vma, addr, &gfn, nr, NULL, prot, domid, false, pages); } int xen_unmap_domain_gfn_range(struct vm_area_struct *vma, int numpgs, struct page **pages); int xen_xlate_map_ballooned_pages(xen_pfn_t **pfns, void **vaddr, unsigned long nr_grant_frames); bool xen_running_on_version_or_later(unsigned int major, unsigned int minor); void xen_efi_runtime_setup(void); #ifdef CONFIG_PREEMPT static inline void xen_preemptible_hcall_begin(void) { } static inline void xen_preemptible_hcall_end(void) { } #else DECLARE_PER_CPU(bool, xen_in_preemptible_hcall); static inline void xen_preemptible_hcall_begin(void) { __this_cpu_write(xen_in_preemptible_hcall, true); } static inline void xen_preemptible_hcall_end(void) { __this_cpu_write(xen_in_preemptible_hcall, false); } #endif /* CONFIG_PREEMPT */ #endif /* INCLUDE_XEN_OPS_H */ balloon.h 0000644 00000001715 14722073410 0006347 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /****************************************************************************** * Xen balloon functionality */ #define RETRY_UNLIMITED 0 struct balloon_stats { /* We aim for 'current allocation' == 'target allocation'. */ unsigned long current_pages; unsigned long target_pages; unsigned long target_unpopulated; /* Number of pages in high- and low-memory balloons. */ unsigned long balloon_low; unsigned long balloon_high; unsigned long total_pages; unsigned long schedule_delay; unsigned long max_schedule_delay; unsigned long retry_count; unsigned long max_retry_count; }; extern struct balloon_stats balloon_stats; void balloon_set_new_target(unsigned long target); int alloc_xenballooned_pages(int nr_pages, struct page **pages); void free_xenballooned_pages(int nr_pages, struct page **pages); #ifdef CONFIG_XEN_BALLOON void xen_balloon_init(void); #else static inline void xen_balloon_init(void) { } #endif xenbus.h 0000644 00000021054 14722073410 0006223 0 ustar 00 /****************************************************************************** * xenbus.h * * Talks to Xen Store to figure out what devices we have. * * Copyright (C) 2005 Rusty Russell, IBM Corporation * Copyright (C) 2005 XenSource Ltd. 
* * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License version 2 * as published by the Free Software Foundation; or, when distributed * separately from the Linux kernel or incorporated into other * software packages, subject to the following license: * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this source file (the "Software"), to deal in the Software without * restriction, including without limitation the rights to use, copy, modify, * merge, publish, distribute, sublicense, and/or sell copies of the Software, * and to permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef _XEN_XENBUS_H #define _XEN_XENBUS_H #include <linux/device.h> #include <linux/notifier.h> #include <linux/mutex.h> #include <linux/export.h> #include <linux/fs.h> #include <linux/completion.h> #include <linux/init.h> #include <linux/slab.h> #include <xen/interface/xen.h> #include <xen/interface/grant_table.h> #include <xen/interface/io/xenbus.h> #include <xen/interface/io/xs_wire.h> #define XENBUS_MAX_RING_GRANT_ORDER 4 #define XENBUS_MAX_RING_GRANTS (1U << XENBUS_MAX_RING_GRANT_ORDER) #define INVALID_GRANT_HANDLE (~0U) /* Register callback to watch this node. */ struct xenbus_watch { struct list_head list; /* Path being watched. */ const char *node; unsigned int nr_pending; /* * Called just before enqueing new event while a spinlock is held. * The event will be discarded if this callback returns false. */ bool (*will_handle)(struct xenbus_watch *, const char *path, const char *token); /* Callback (executed in a process context with no locks held). */ void (*callback)(struct xenbus_watch *, const char *path, const char *token); }; /* A xenbus device. */ struct xenbus_device { const char *devicetype; const char *nodename; const char *otherend; int otherend_id; struct xenbus_watch otherend_watch; struct device dev; enum xenbus_state state; struct completion down; struct work_struct work; }; static inline struct xenbus_device *to_xenbus_device(struct device *dev) { return container_of(dev, struct xenbus_device, dev); } struct xenbus_device_id { /* .../device/<device_type>/<identifier> */ char devicetype[32]; /* General class of device. */ }; /* A xenbus driver. 
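 *
 * Illustrative sketch, not from the original header (all example_*
 * names are hypothetical): a frontend defines the device types it
 * handles plus its callbacks, then registers with
 * xenbus_register_frontend():
 *
 *	static const struct xenbus_device_id example_ids[] = {
 *		{ "vexample" },
 *		{ "" }
 *	};
 *
 *	static struct xenbus_driver example_driver = {
 *		.ids = example_ids,
 *		.probe = example_probe,
 *		.remove = example_remove,
 *		.otherend_changed = example_backend_changed,
 *	};
 *
 * and in the module init function:
 *
 *	return xenbus_register_frontend(&example_driver);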
*/ struct xenbus_driver { const char *name; /* defaults to ids[0].devicetype */ const struct xenbus_device_id *ids; int (*probe)(struct xenbus_device *dev, const struct xenbus_device_id *id); void (*otherend_changed)(struct xenbus_device *dev, enum xenbus_state backend_state); int (*remove)(struct xenbus_device *dev); int (*suspend)(struct xenbus_device *dev); int (*resume)(struct xenbus_device *dev); int (*uevent)(struct xenbus_device *, struct kobj_uevent_env *); struct device_driver driver; int (*read_otherend_details)(struct xenbus_device *dev); int (*is_ready)(struct xenbus_device *dev); }; static inline struct xenbus_driver *to_xenbus_driver(struct device_driver *drv) { return container_of(drv, struct xenbus_driver, driver); } int __must_check __xenbus_register_frontend(struct xenbus_driver *drv, struct module *owner, const char *mod_name); int __must_check __xenbus_register_backend(struct xenbus_driver *drv, struct module *owner, const char *mod_name); #define xenbus_register_frontend(drv) \ __xenbus_register_frontend(drv, THIS_MODULE, KBUILD_MODNAME) #define xenbus_register_backend(drv) \ __xenbus_register_backend(drv, THIS_MODULE, KBUILD_MODNAME) void xenbus_unregister_driver(struct xenbus_driver *drv); struct xenbus_transaction { u32 id; }; /* Nil transaction ID. */ #define XBT_NIL ((struct xenbus_transaction) { 0 }) char **xenbus_directory(struct xenbus_transaction t, const char *dir, const char *node, unsigned int *num); void *xenbus_read(struct xenbus_transaction t, const char *dir, const char *node, unsigned int *len); int xenbus_write(struct xenbus_transaction t, const char *dir, const char *node, const char *string); int xenbus_mkdir(struct xenbus_transaction t, const char *dir, const char *node); int xenbus_exists(struct xenbus_transaction t, const char *dir, const char *node); int xenbus_rm(struct xenbus_transaction t, const char *dir, const char *node); int xenbus_transaction_start(struct xenbus_transaction *t); int xenbus_transaction_end(struct xenbus_transaction t, int abort); /* Single read and scanf: returns -errno or num scanned if > 0. */ __scanf(4, 5) int xenbus_scanf(struct xenbus_transaction t, const char *dir, const char *node, const char *fmt, ...); /* Read an (optional) unsigned value. */ unsigned int xenbus_read_unsigned(const char *dir, const char *node, unsigned int default_val); /* Single printf and write: returns -errno or 0. */ __printf(4, 5) int xenbus_printf(struct xenbus_transaction t, const char *dir, const char *node, const char *fmt, ...); /* Generic read function: NULL-terminated triples of name, * sprintf-style type string, and pointer. 
Returns 0 or errno.*/ int xenbus_gather(struct xenbus_transaction t, const char *dir, ...); /* notifer routines for when the xenstore comes up */ extern int xenstored_ready; int register_xenstore_notifier(struct notifier_block *nb); void unregister_xenstore_notifier(struct notifier_block *nb); int register_xenbus_watch(struct xenbus_watch *watch); void unregister_xenbus_watch(struct xenbus_watch *watch); void xs_suspend(void); void xs_resume(void); void xs_suspend_cancel(void); struct work_struct; #define XENBUS_IS_ERR_READ(str) ({ \ if (!IS_ERR(str) && strlen(str) == 0) { \ kfree(str); \ str = ERR_PTR(-ERANGE); \ } \ IS_ERR(str); \ }) #define XENBUS_EXIST_ERR(err) ((err) == -ENOENT || (err) == -ERANGE) int xenbus_watch_path(struct xenbus_device *dev, const char *path, struct xenbus_watch *watch, bool (*will_handle)(struct xenbus_watch *, const char *, const char *), void (*callback)(struct xenbus_watch *, const char *, const char *)); __printf(5, 6) int xenbus_watch_pathfmt(struct xenbus_device *dev, struct xenbus_watch *watch, bool (*will_handle)(struct xenbus_watch *, const char *, const char *), void (*callback)(struct xenbus_watch *, const char *, const char *), const char *pathfmt, ...); int xenbus_switch_state(struct xenbus_device *dev, enum xenbus_state new_state); int xenbus_grant_ring(struct xenbus_device *dev, void *vaddr, unsigned int nr_pages, grant_ref_t *grefs); int xenbus_map_ring_valloc(struct xenbus_device *dev, grant_ref_t *gnt_refs, unsigned int nr_grefs, void **vaddr); int xenbus_map_ring(struct xenbus_device *dev, grant_ref_t *gnt_refs, unsigned int nr_grefs, grant_handle_t *handles, unsigned long *vaddrs, bool *leaked); int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr); int xenbus_unmap_ring(struct xenbus_device *dev, grant_handle_t *handles, unsigned int nr_handles, unsigned long *vaddrs); int xenbus_alloc_evtchn(struct xenbus_device *dev, int *port); int xenbus_free_evtchn(struct xenbus_device *dev, int port); enum xenbus_state xenbus_read_driver_state(const char *path); __printf(3, 4) void xenbus_dev_error(struct xenbus_device *dev, int err, const char *fmt, ...); __printf(3, 4) void xenbus_dev_fatal(struct xenbus_device *dev, int err, const char *fmt, ...); const char *xenbus_strstate(enum xenbus_state state); int xenbus_dev_is_online(struct xenbus_device *dev); int xenbus_frontend_closed(struct xenbus_device *dev); extern const struct file_operations xen_xenbus_fops; extern struct xenstore_domain_interface *xen_store_interface; extern int xen_store_evtchn; #endif /* _XEN_XENBUS_H */ features.h 0000644 00000000766 14722073410 0006544 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ /****************************************************************************** * features.h * * Query the features reported by Xen. 
* * Copyright (c) 2006, Ian Campbell */ #ifndef __XEN_FEATURES_H__ #define __XEN_FEATURES_H__ #include <xen/interface/features.h> void xen_setup_features(void); extern u8 xen_features[XENFEAT_NR_SUBMAPS * 32]; static inline int xen_feature(int flag) { return xen_features[flag]; } #endif /* __ASM_XEN_FEATURES_H__ */ events.h 0000644 00000012260 14722073410 0006222 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef _XEN_EVENTS_H #define _XEN_EVENTS_H #include <linux/interrupt.h> #include <linux/irq.h> #ifdef CONFIG_PCI_MSI #include <linux/msi.h> #endif #include <xen/interface/event_channel.h> #include <asm/xen/hypercall.h> #include <asm/xen/events.h> unsigned xen_evtchn_nr_channels(void); int bind_evtchn_to_irq(evtchn_port_t evtchn); int bind_evtchn_to_irq_lateeoi(evtchn_port_t evtchn); int bind_evtchn_to_irqhandler(evtchn_port_t evtchn, irq_handler_t handler, unsigned long irqflags, const char *devname, void *dev_id); int bind_evtchn_to_irqhandler_lateeoi(evtchn_port_t evtchn, irq_handler_t handler, unsigned long irqflags, const char *devname, void *dev_id); int bind_virq_to_irq(unsigned int virq, unsigned int cpu, bool percpu); int bind_virq_to_irqhandler(unsigned int virq, unsigned int cpu, irq_handler_t handler, unsigned long irqflags, const char *devname, void *dev_id); int bind_ipi_to_irqhandler(enum ipi_vector ipi, unsigned int cpu, irq_handler_t handler, unsigned long irqflags, const char *devname, void *dev_id); int bind_interdomain_evtchn_to_irq(unsigned int remote_domain, evtchn_port_t remote_port); int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain, evtchn_port_t remote_port); int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain, evtchn_port_t remote_port, irq_handler_t handler, unsigned long irqflags, const char *devname, void *dev_id); int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain, evtchn_port_t remote_port, irq_handler_t handler, unsigned long irqflags, const char *devname, void *dev_id); /* * Common unbind function for all event sources. Takes IRQ to unbind from. * Automatically closes the underlying event channel (even for bindings * made with bind_evtchn_to_irqhandler()). */ void unbind_from_irqhandler(unsigned int irq, void *dev_id); /* * Send late EOI for an IRQ bound to an event channel via one of the *_lateeoi * functions above. */ void xen_irq_lateeoi(unsigned int irq, unsigned int eoi_flags); /* Signal an event was spurious, i.e. there was no action resulting from it. 
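 *
 * Illustrative sketch, not from the original header (the example_*
 * names are hypothetical): a handler bound with one of the *_lateeoi
 * variants above always signals EOI eventually, and flags wakeups that
 * required no action so the core can throttle a misbehaving event
 * channel:
 *
 *	static irqreturn_t example_handler(int irq, void *dev_id)
 *	{
 *		struct example_dev *d = dev_id;
 *
 *		if (!example_has_work(d)) {
 *			xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
 *			return IRQ_HANDLED;
 *		}
 *		example_do_work(d);
 *		xen_irq_lateeoi(irq, 0);
 *		return IRQ_HANDLED;
 *	}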
*/ #define XEN_EOI_FLAG_SPURIOUS 0x00000001 #define XEN_IRQ_PRIORITY_MAX EVTCHN_FIFO_PRIORITY_MAX #define XEN_IRQ_PRIORITY_DEFAULT EVTCHN_FIFO_PRIORITY_DEFAULT #define XEN_IRQ_PRIORITY_MIN EVTCHN_FIFO_PRIORITY_MIN int xen_set_irq_priority(unsigned irq, unsigned priority); /* * Allow extra references to event channels exposed to userspace by evtchn */ int evtchn_make_refcounted(unsigned int evtchn); int evtchn_get(unsigned int evtchn); void evtchn_put(unsigned int evtchn); void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector); void rebind_evtchn_irq(int evtchn, int irq); int xen_set_affinity_evtchn(struct irq_desc *desc, unsigned int tcpu); static inline void notify_remote_via_evtchn(int port) { struct evtchn_send send = { .port = port }; (void)HYPERVISOR_event_channel_op(EVTCHNOP_send, &send); } void notify_remote_via_irq(int irq); void xen_irq_resume(void); /* Clear an irq's pending state, in preparation for polling on it */ void xen_clear_irq_pending(int irq); void xen_set_irq_pending(int irq); bool xen_test_irq_pending(int irq); /* Poll waiting for an irq to become pending. In the usual case, the irq will be disabled so it won't deliver an interrupt. */ void xen_poll_irq(int irq); /* Poll waiting for an irq to become pending with a timeout. In the usual case, * the irq will be disabled so it won't deliver an interrupt. */ void xen_poll_irq_timeout(int irq, u64 timeout); /* Determine the IRQ which is bound to an event channel */ unsigned irq_from_evtchn(unsigned int evtchn); int irq_from_virq(unsigned int cpu, unsigned int virq); unsigned int evtchn_from_irq(unsigned irq); #ifdef CONFIG_XEN_PVHVM /* Xen HVM evtchn vector callback */ void xen_hvm_callback_vector(void); #ifdef CONFIG_TRACING #define trace_xen_hvm_callback_vector xen_hvm_callback_vector #endif #endif int xen_set_callback_via(uint64_t via); void xen_evtchn_do_upcall(struct pt_regs *regs); void xen_hvm_evtchn_do_upcall(void); /* Bind a pirq for a physical interrupt to an irq. */ int xen_bind_pirq_gsi_to_irq(unsigned gsi, unsigned pirq, int shareable, char *name); #ifdef CONFIG_PCI_MSI /* Allocate a pirq for a MSI style physical interrupt. */ int xen_allocate_pirq_msi(struct pci_dev *dev, struct msi_desc *msidesc); /* Bind an PSI pirq to an irq. */ int xen_bind_pirq_msi_to_irq(struct pci_dev *dev, struct msi_desc *msidesc, int pirq, int nvec, const char *name, domid_t domid); #endif /* De-allocates the above mentioned physical interrupt. */ int xen_destroy_irq(int irq); /* Return irq from pirq */ int xen_irq_from_pirq(unsigned pirq); /* Return the pirq allocated to the irq. */ int xen_pirq_from_irq(unsigned irq); /* Return the irq allocated to the gsi */ int xen_irq_from_gsi(unsigned gsi); /* Determine whether to ignore this IRQ if it is passed to a guest. 
*/ int xen_test_irq_shared(int irq); /* initialize Xen IRQ subsystem */ void xen_init_IRQ(void); #endif /* _XEN_EVENTS_H */ xen.h 0000644 00000002455 14722073410 0005515 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef _XEN_XEN_H #define _XEN_XEN_H enum xen_domain_type { XEN_NATIVE, /* running on bare hardware */ XEN_PV_DOMAIN, /* running in a PV domain */ XEN_HVM_DOMAIN, /* running in a Xen hvm domain */ }; #ifdef CONFIG_XEN extern enum xen_domain_type xen_domain_type; #else #define xen_domain_type XEN_NATIVE #endif #ifdef CONFIG_XEN_PVH extern bool xen_pvh; #else #define xen_pvh 0 #endif #define xen_domain() (xen_domain_type != XEN_NATIVE) #define xen_pv_domain() (xen_domain_type == XEN_PV_DOMAIN) #define xen_hvm_domain() (xen_domain_type == XEN_HVM_DOMAIN) #define xen_pvh_domain() (xen_pvh) #include <linux/types.h> extern uint32_t xen_start_flags; #include <xen/interface/hvm/start_info.h> extern struct hvm_start_info pvh_start_info; #ifdef CONFIG_XEN_DOM0 #include <xen/interface/xen.h> #include <asm/xen/hypervisor.h> #define xen_initial_domain() (xen_domain() && \ (xen_start_flags & SIF_INITDOMAIN)) #else /* !CONFIG_XEN_DOM0 */ #define xen_initial_domain() (0) #endif /* CONFIG_XEN_DOM0 */ struct bio_vec; struct page; bool xen_biovec_phys_mergeable(const struct bio_vec *vec1, const struct page *page); #if defined(CONFIG_MEMORY_HOTPLUG) && defined(CONFIG_XEN_BALLOON) extern u64 xen_saved_max_mem_size; #endif #endif /* _XEN_XEN_H */ platform_pci.h 0000644 00000004056 14722073410 0007401 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef _XEN_PLATFORM_PCI_H #define _XEN_PLATFORM_PCI_H #define XEN_IOPORT_MAGIC_VAL 0x49d2 #define XEN_IOPORT_LINUX_PRODNUM 0x0003 #define XEN_IOPORT_LINUX_DRVVER 0x0001 #define XEN_IOPORT_BASE 0x10 #define XEN_IOPORT_PLATFLAGS (XEN_IOPORT_BASE + 0) /* 1 byte access (R/W) */ #define XEN_IOPORT_MAGIC (XEN_IOPORT_BASE + 0) /* 2 byte access (R) */ #define XEN_IOPORT_UNPLUG (XEN_IOPORT_BASE + 0) /* 2 byte access (W) */ #define XEN_IOPORT_DRVVER (XEN_IOPORT_BASE + 0) /* 4 byte access (W) */ #define XEN_IOPORT_SYSLOG (XEN_IOPORT_BASE + 2) /* 1 byte access (W) */ #define XEN_IOPORT_PROTOVER (XEN_IOPORT_BASE + 2) /* 1 byte access (R) */ #define XEN_IOPORT_PRODNUM (XEN_IOPORT_BASE + 2) /* 2 byte access (W) */ #define XEN_UNPLUG_ALL_IDE_DISKS (1<<0) #define XEN_UNPLUG_ALL_NICS (1<<1) #define XEN_UNPLUG_AUX_IDE_DISKS (1<<2) #define XEN_UNPLUG_ALL (XEN_UNPLUG_ALL_IDE_DISKS|\ XEN_UNPLUG_ALL_NICS|\ XEN_UNPLUG_AUX_IDE_DISKS) #define XEN_UNPLUG_UNNECESSARY (1<<16) #define XEN_UNPLUG_NEVER (1<<17) static inline int xen_must_unplug_nics(void) { #if (defined(CONFIG_XEN_NETDEV_FRONTEND) || \ defined(CONFIG_XEN_NETDEV_FRONTEND_MODULE)) && \ defined(CONFIG_XEN_PVHVM) return 1; #else return 0; #endif } static inline int xen_must_unplug_disks(void) { #if (defined(CONFIG_XEN_BLKDEV_FRONTEND) || \ defined(CONFIG_XEN_BLKDEV_FRONTEND_MODULE)) && \ defined(CONFIG_XEN_PVHVM) return 1; #else return 0; #endif } #if defined(CONFIG_XEN_PVHVM) extern bool xen_has_pv_devices(void); extern bool xen_has_pv_disk_devices(void); extern bool xen_has_pv_nic_devices(void); extern bool xen_has_pv_and_legacy_disk_devices(void); #else static inline bool xen_has_pv_devices(void) { return IS_ENABLED(CONFIG_XEN); } static inline bool xen_has_pv_disk_devices(void) { return IS_ENABLED(CONFIG_XEN); } static inline bool xen_has_pv_nic_devices(void) { return IS_ENABLED(CONFIG_XEN); } static inline bool xen_has_pv_and_legacy_disk_devices(void) { return false; } #endif #endif /* _XEN_PLATFORM_PCI_H */ 
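/*
 * Illustrative sketch, not part of the original headers: a minimal
 * example of how the event-channel API declared in events.h above is
 * typically used by a backend. All example_* identifiers are
 * hypothetical.
 */
#include <linux/interrupt.h>
#include <xen/events.h>

/* Hypothetical per-connection state for this sketch. */
struct example_conn {
	int irq;
	evtchn_port_t remote_port;
};

static irqreturn_t example_interrupt(int irq, void *dev_id)
{
	/* A real handler would process its ring here, then kick the peer. */
	notify_remote_via_irq(irq);
	return IRQ_HANDLED;
}

static int example_connect(struct example_conn *conn, unsigned int domid)
{
	int err;

	/* Bind the remote domain's event channel to a local IRQ. */
	err = bind_interdomain_evtchn_to_irqhandler(domid, conn->remote_port,
						    example_interrupt, 0,
						    "example", conn);
	if (err < 0)
		return err;
	conn->irq = err;
	return 0;
}

static void example_disconnect(struct example_conn *conn)
{
	/* This also closes the underlying event channel. */
	unbind_from_irqhandler(conn->irq, conn);
}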
xen-front-pgdir-shbuf.h 0000644 00000004502 14722073410 0011046 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 OR MIT */ /* * Xen frontend/backend page directory based shared buffer * helper module. * * Copyright (C) 2018 EPAM Systems Inc. * * Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com> */ #ifndef __XEN_FRONT_PGDIR_SHBUF_H_ #define __XEN_FRONT_PGDIR_SHBUF_H_ #include <linux/kernel.h> #include <xen/grant_table.h> struct xen_front_pgdir_shbuf_ops; struct xen_front_pgdir_shbuf { /* * Number of references granted for the backend use: * * - for frontend allocated/imported buffers this holds the number * of grant references for the page directory and the pages * of the buffer * * - for the buffer provided by the backend this only holds the number * of grant references for the page directory itself as grant * references for the buffer will be provided by the backend. */ int num_grefs; grant_ref_t *grefs; /* Page directory backing storage. */ u8 *directory; /* * Number of pages for the shared buffer itself (excluding the page * directory). */ int num_pages; /* * Backing storage of the shared buffer: these are the pages being * shared. */ struct page **pages; struct xenbus_device *xb_dev; /* These are the ops used internally depending on be_alloc mode. */ const struct xen_front_pgdir_shbuf_ops *ops; /* Xen map handles for the buffer allocated by the backend. */ grant_handle_t *backend_map_handles; }; struct xen_front_pgdir_shbuf_cfg { struct xenbus_device *xb_dev; /* Number of pages of the buffer backing storage. */ int num_pages; /* Pages of the buffer to be shared. */ struct page **pages; /* * This is allocated outside because there are use-cases when * the buffer structure is allocated as a part of a bigger one. */ struct xen_front_pgdir_shbuf *pgdir; /* * Mode of grant reference sharing: if set then backend will share * grant references to the buffer with the frontend. 
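 *
 * Illustrative setup flow, not from the original header (names other
 * than the structure fields are hypothetical):
 *
 *	struct xen_front_pgdir_shbuf shbuf;
 *	struct xen_front_pgdir_shbuf_cfg cfg = {
 *		.xb_dev = xb_dev,
 *		.num_pages = num_pages,
 *		.pages = pages,
 *		.pgdir = &shbuf,
 *		.be_alloc = backend_allocates,
 *	};
 *
 *	ret = xen_front_pgdir_shbuf_alloc(&cfg);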
*/ int be_alloc; }; int xen_front_pgdir_shbuf_alloc(struct xen_front_pgdir_shbuf_cfg *cfg); grant_ref_t xen_front_pgdir_shbuf_get_dir_start(struct xen_front_pgdir_shbuf *buf); int xen_front_pgdir_shbuf_map(struct xen_front_pgdir_shbuf *buf); int xen_front_pgdir_shbuf_unmap(struct xen_front_pgdir_shbuf *buf); void xen_front_pgdir_shbuf_free(struct xen_front_pgdir_shbuf *buf); #endif /* __XEN_FRONT_PGDIR_SHBUF_H_ */ swiotlb-xen.h 0000644 00000000755 14722073410 0007177 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef __LINUX_SWIOTLB_XEN_H #define __LINUX_SWIOTLB_XEN_H #include <linux/swiotlb.h> void xen_dma_sync_for_cpu(dma_addr_t handle, phys_addr_t paddr, size_t size, enum dma_data_direction dir); void xen_dma_sync_for_device(dma_addr_t handle, phys_addr_t paddr, size_t size, enum dma_data_direction dir); extern int xen_swiotlb_init(int verbose, bool early); extern const struct dma_map_ops xen_swiotlb_dma_ops; #endif /* __LINUX_SWIOTLB_XEN_H */ acpi.h 0000644 00000006644 14722073410 0005643 0 ustar 00 /****************************************************************************** * acpi.h * acpi file for domain 0 kernel * * Copyright (c) 2011 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> * Copyright (c) 2011 Yu Ke <ke.yu@intel.com> * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License version 2 * as published by the Free Software Foundation; or, when distributed * separately from the Linux kernel or incorporated into other * software packages, subject to the following license: * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this source file (the "Software"), to deal in the Software without * restriction, including without limitation the rights to use, copy, modify, * merge, publish, distribute, sublicense, and/or sell copies of the Software, * and to permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. 
*/ #ifndef _XEN_ACPI_H #define _XEN_ACPI_H #include <linux/types.h> #ifdef CONFIG_XEN_DOM0 #include <asm/xen/hypervisor.h> #include <xen/xen.h> #include <linux/acpi.h> #define ACPI_MEMORY_DEVICE_CLASS "memory" #define ACPI_MEMORY_DEVICE_HID "PNP0C80" #define ACPI_MEMORY_DEVICE_NAME "Hotplug Mem Device" int xen_stub_memory_device_init(void); void xen_stub_memory_device_exit(void); #define ACPI_PROCESSOR_CLASS "processor" #define ACPI_PROCESSOR_DEVICE_HID "ACPI0007" #define ACPI_PROCESSOR_DEVICE_NAME "Processor" int xen_stub_processor_init(void); void xen_stub_processor_exit(void); void xen_pcpu_hotplug_sync(void); int xen_pcpu_id(uint32_t acpi_id); static inline int xen_acpi_get_pxm(acpi_handle h) { unsigned long long pxm; acpi_status status; acpi_handle handle; acpi_handle phandle = h; do { handle = phandle; status = acpi_evaluate_integer(handle, "_PXM", NULL, &pxm); if (ACPI_SUCCESS(status)) return pxm; status = acpi_get_parent(handle, &phandle); } while (ACPI_SUCCESS(status)); return -ENXIO; } int xen_acpi_notify_hypervisor_sleep(u8 sleep_state, u32 pm1a_cnt, u32 pm1b_cnd); int xen_acpi_notify_hypervisor_extended_sleep(u8 sleep_state, u32 val_a, u32 val_b); static inline int xen_acpi_suspend_lowlevel(void) { /* * Xen will save and restore CPU context, so * we can skip that and just go straight to * the suspend. */ acpi_enter_sleep_state(ACPI_STATE_S3); return 0; } static inline void xen_acpi_sleep_register(void) { if (xen_initial_domain()) { acpi_os_set_prepare_sleep( &xen_acpi_notify_hypervisor_sleep); acpi_os_set_prepare_extended_sleep( &xen_acpi_notify_hypervisor_extended_sleep); acpi_suspend_lowlevel = xen_acpi_suspend_lowlevel; } } #else static inline void xen_acpi_sleep_register(void) { } #endif #endif /* _XEN_ACPI_H */ grant_table.h 0000644 00000025116 14722073410 0007204 0 ustar 00 /****************************************************************************** * grant_table.h * * Two sets of functionality: * 1. Granting foreign access to our memory reservation. * 2. Accessing others' memory reservations via grant references. * (i.e., mechanisms for both sender and recipient of grant references) * * Copyright (c) 2004-2005, K A Fraser * Copyright (c) 2005, Christopher Clark * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License version 2 * as published by the Free Software Foundation; or, when distributed * separately from the Linux kernel or incorporated into other * software packages, subject to the following license: * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this source file (the "Software"), to deal in the Software without * restriction, including without limitation the rights to use, copy, modify, * merge, publish, distribute, sublicense, and/or sell copies of the Software, * and to permit persons to whom the Software is furnished to do so, subject to * the following conditions: * * The above copyright notice and this permission notice shall be included in * all copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS * IN THE SOFTWARE. */ #ifndef __ASM_GNTTAB_H__ #define __ASM_GNTTAB_H__ #include <asm/page.h> #include <xen/interface/xen.h> #include <xen/interface/grant_table.h> #include <asm/xen/hypervisor.h> #include <xen/features.h> #include <xen/page.h> #include <linux/mm_types.h> #include <linux/page-flags.h> #include <linux/kernel.h> #define GNTTAB_RESERVED_XENSTORE 1 /* NR_GRANT_FRAMES must be less than or equal to that configured in Xen */ #define NR_GRANT_FRAMES 4 struct gnttab_free_callback { struct gnttab_free_callback *next; void (*fn)(void *); void *arg; u16 count; }; struct gntab_unmap_queue_data; typedef void (*gnttab_unmap_refs_done)(int result, struct gntab_unmap_queue_data *data); struct gntab_unmap_queue_data { struct delayed_work gnttab_work; void *data; gnttab_unmap_refs_done done; struct gnttab_unmap_grant_ref *unmap_ops; struct gnttab_unmap_grant_ref *kunmap_ops; struct page **pages; unsigned int count; unsigned int age; }; int gnttab_init(void); int gnttab_suspend(void); int gnttab_resume(void); int gnttab_grant_foreign_access(domid_t domid, unsigned long frame, int readonly); /* * End access through the given grant reference, iff the grant entry is no * longer in use. Return 1 if the grant entry was freed, 0 if it is still in * use. */ int gnttab_end_foreign_access_ref(grant_ref_t ref, int readonly); /* * Eventually end access through the given grant reference, and once that * access has been ended, free the given page too. Access will be ended * immediately iff the grant entry is not in use, otherwise it will happen * some time later. page may be 0, in which case no freeing will occur. * Note that the granted page might still be accessed (read or write) by the * other side after gnttab_end_foreign_access() returns, so even if page was * specified as 0 it is not allowed to just reuse the page for other * purposes immediately. gnttab_end_foreign_access() will take an additional * reference to the granted page in this case, which is dropped only after * the grant is no longer in use. * This requires that multi page allocations for areas subject to * gnttab_end_foreign_access() are done via alloc_pages_exact() (and freeing * via free_pages_exact()) in order to avoid high order pages. */ void gnttab_end_foreign_access(grant_ref_t ref, int readonly, unsigned long page); /* * End access through the given grant reference, iff the grant entry is * no longer in use. In case of success ending foreign access, the * grant reference is deallocated. * Return 1 if the grant entry was freed, 0 if it is still in use. 
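 *
 * Illustrative use, not from the original header: on a teardown path
 * where the other end may still hold a mapping, the backing page may
 * only be reused once this call succeeds (on success the reference
 * itself has already been deallocated):
 *
 *	if (gnttab_try_end_foreign_access(ref))
 *		__free_page(page);
 *	else
 *		pr_warn("grant %u still in use by the other end\n", ref);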
*/ int gnttab_try_end_foreign_access(grant_ref_t ref); int gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn); unsigned long gnttab_end_foreign_transfer_ref(grant_ref_t ref); unsigned long gnttab_end_foreign_transfer(grant_ref_t ref); /* * operations on reserved batches of grant references */ int gnttab_alloc_grant_references(u16 count, grant_ref_t *pprivate_head); void gnttab_free_grant_reference(grant_ref_t ref); void gnttab_free_grant_references(grant_ref_t head); int gnttab_empty_grant_references(const grant_ref_t *pprivate_head); int gnttab_claim_grant_reference(grant_ref_t *pprivate_head); void gnttab_release_grant_reference(grant_ref_t *private_head, grant_ref_t release); void gnttab_request_free_callback(struct gnttab_free_callback *callback, void (*fn)(void *), void *arg, u16 count); void gnttab_cancel_free_callback(struct gnttab_free_callback *callback); void gnttab_grant_foreign_access_ref(grant_ref_t ref, domid_t domid, unsigned long frame, int readonly); /* Give access to the first 4K of the page */ static inline void gnttab_page_grant_foreign_access_ref_one( grant_ref_t ref, domid_t domid, struct page *page, int readonly) { gnttab_grant_foreign_access_ref(ref, domid, xen_page_to_gfn(page), readonly); } void gnttab_grant_foreign_transfer_ref(grant_ref_t, domid_t domid, unsigned long pfn); static inline void gnttab_set_map_op(struct gnttab_map_grant_ref *map, phys_addr_t addr, uint32_t flags, grant_ref_t ref, domid_t domid) { if (flags & GNTMAP_contains_pte) map->host_addr = addr; else if (xen_feature(XENFEAT_auto_translated_physmap)) map->host_addr = __pa(addr); else map->host_addr = addr; map->flags = flags; map->ref = ref; map->dom = domid; map->status = 1; /* arbitrary positive value */ } static inline void gnttab_set_unmap_op(struct gnttab_unmap_grant_ref *unmap, phys_addr_t addr, uint32_t flags, grant_handle_t handle) { if (flags & GNTMAP_contains_pte) unmap->host_addr = addr; else if (xen_feature(XENFEAT_auto_translated_physmap)) unmap->host_addr = __pa(addr); else unmap->host_addr = addr; unmap->handle = handle; unmap->dev_bus_addr = 0; } int arch_gnttab_init(unsigned long nr_shared, unsigned long nr_status); int arch_gnttab_map_shared(xen_pfn_t *frames, unsigned long nr_gframes, unsigned long max_nr_gframes, void **__shared); int arch_gnttab_map_status(uint64_t *frames, unsigned long nr_gframes, unsigned long max_nr_gframes, grant_status_t **__shared); void arch_gnttab_unmap(void *shared, unsigned long nr_gframes); struct grant_frames { xen_pfn_t *pfn; unsigned int count; void *vaddr; }; extern struct grant_frames xen_auto_xlat_grant_frames; unsigned int gnttab_max_grant_frames(void); int gnttab_setup_auto_xlat_frames(phys_addr_t addr); void gnttab_free_auto_xlat_frames(void); #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr)) int gnttab_alloc_pages(int nr_pages, struct page **pages); void gnttab_free_pages(int nr_pages, struct page **pages); #ifdef CONFIG_XEN_GRANT_DMA_ALLOC struct gnttab_dma_alloc_args { /* Device for which DMA memory will be/was allocated. */ struct device *dev; /* If set then DMA buffer is coherent and write-combine otherwise. 
*/ bool coherent; int nr_pages; struct page **pages; xen_pfn_t *frames; void *vaddr; dma_addr_t dev_bus_addr; }; int gnttab_dma_alloc_pages(struct gnttab_dma_alloc_args *args); int gnttab_dma_free_pages(struct gnttab_dma_alloc_args *args); #endif int gnttab_pages_set_private(int nr_pages, struct page **pages); void gnttab_pages_clear_private(int nr_pages, struct page **pages); int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops, struct gnttab_map_grant_ref *kmap_ops, struct page **pages, unsigned int count); int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops, struct gnttab_unmap_grant_ref *kunmap_ops, struct page **pages, unsigned int count); void gnttab_unmap_refs_async(struct gntab_unmap_queue_data* item); int gnttab_unmap_refs_sync(struct gntab_unmap_queue_data *item); /* Perform a batch of grant map/copy operations. Retry every batch slot * for which the hypervisor returns GNTST_eagain. This is typically due * to paged out target frames. * * Will retry for 1, 2, ... 255 ms, i.e. 256 times during 32 seconds. * * Return value in each iand every status field of the batch guaranteed * to not be GNTST_eagain. */ void gnttab_batch_map(struct gnttab_map_grant_ref *batch, unsigned count); void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count); struct xen_page_foreign { domid_t domid; grant_ref_t gref; }; static inline struct xen_page_foreign *xen_page_foreign(struct page *page) { if (!PageForeign(page)) return NULL; #if BITS_PER_LONG < 64 return (struct xen_page_foreign *)page->private; #else BUILD_BUG_ON(sizeof(struct xen_page_foreign) > BITS_PER_LONG); return (struct xen_page_foreign *)&page->private; #endif } /* Split Linux page in chunk of the size of the grant and call fn * * Parameters of fn: * gfn: guest frame number * offset: offset in the grant * len: length of the data in the grant. * data: internal information */ typedef void (*xen_grant_fn_t)(unsigned long gfn, unsigned int offset, unsigned int len, void *data); void gnttab_foreach_grant_in_range(struct page *page, unsigned int offset, unsigned int len, xen_grant_fn_t fn, void *data); /* Helper to get to call fn only on the first "grant chunk" */ static inline void gnttab_for_one_grant(struct page *page, unsigned int offset, unsigned len, xen_grant_fn_t fn, void *data) { /* The first request is limited to the size of one grant */ len = min_t(unsigned int, XEN_PAGE_SIZE - (offset & ~XEN_PAGE_MASK), len); gnttab_foreach_grant_in_range(page, offset, len, fn, data); } /* Get @nr_grefs grants from an array of page and call fn for each grant */ void gnttab_foreach_grant(struct page **pages, unsigned int nr_grefs, xen_grant_fn_t fn, void *data); /* Get the number of grant in a specified region * * start: Offset from the beginning of the first page * len: total length of data (can cross multiple page) */ static inline unsigned int gnttab_count_grant(unsigned int start, unsigned int len) { return XEN_PFN_UP(xen_offset_in_page(start) + len); } #endif /* __ASM_GNTTAB_H__ */ hvc-console.h 0000644 00000001006 14722073410 0007132 0 ustar 00 /* SPDX-License-Identifier: GPL-2.0 */ #ifndef XEN_HVC_CONSOLE_H #define XEN_HVC_CONSOLE_H extern struct console xenboot_console; #ifdef CONFIG_HVC_XEN void xen_console_resume(void); void xen_raw_console_write(const char *str); __printf(1, 2) void xen_raw_printk(const char *fmt, ...); #else static inline void xen_console_resume(void) { } static inline void xen_raw_console_write(const char *str) { } static inline __printf(1, 2) void xen_raw_printk(const char *fmt, ...) 
{ } #endif #endif /* XEN_HVC_CONSOLE_H */
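/*
 * Illustrative sketch, not from the original headers: granting a page
 * to another domain with the grant_table.h API above and revoking the
 * grant again. The example_* names are hypothetical.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <xen/grant_table.h>
#include <xen/page.h>

/* Share one freshly allocated page read-write with @domid. */
static int example_share_page(domid_t domid, struct page **pagep)
{
	struct page *page;
	int ref;

	page = alloc_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* Third argument 0 == grant read-write access. */
	ref = gnttab_grant_foreign_access(domid, xen_page_to_gfn(page), 0);
	if (ref < 0) {
		__free_page(page);
		return ref;
	}

	*pagep = page;
	return ref;
}

static void example_unshare_page(grant_ref_t ref, struct page *page)
{
	/*
	 * Passing the page's address lets the core free it once the
	 * grant is really unused; the other end may still hold a
	 * mapping at this point, as documented above.
	 */
	gnttab_end_foreign_access(ref, 0, (unsigned long)page_address(page));
}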