The cluster file handler mechanism makes it possible to store, retrieve, rename, and delete files using the database. The following file handlers are known to the system by default:
Note that the eZFS and eZFS2 file handlers do not allow actual eZ Publish clustering across multiple servers. Use eZDB or eZDFS for cluster file handling.
eZFS: This is the default file handler, which uses the local file system when dealing with files.
eZFS2: This is the enhanced standard file handler, with better concurrency handling. It requires Linux, or PHP 5.3 when running on Windows, and is still considered experimental.
eZDB: This is the database file handler. It makes it possible to use the database when dealing with files (in a cluster environment these would typically be images, uploaded binary files, content-related caches, etc.). It is split into different back-ends that are compatible with the supported database engines. The default back-ends are located in the "kernel/classes/clusterfilehandlers/dbbackends" directory (currently only the back-end for MySQL).
Cache files are copied locally when used by a front-end. When using the eZ DB file handler, both the metadata and the binary data are stored in the database: the metadata goes into the ezdbfile table, while the binary data is split into chunks and stored in the ezdbfile_data table.
Currently supported databases for this file handler are MySQL and Oracle (when using the eZOracle extension).
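To enable the eZ DB file handler, the handler and its database connection are typically configured in a global override of settings/file.ini. The following is only a minimal sketch: the backend class name and connection values below are placeholder assumptions and should be checked against the file.ini reference for your eZ Publish version.

# settings/override/file.ini.append.php (sketch; values are placeholders, adjust for your setup)
[ClusteringSettings]
FileHandler=eZDBFileHandler
DBBackend=eZDBFileHandlerMysqlBackend
DBHost=cluster-db-host
DBName=cluster
DBUser=clusteruser
DBPassword=clusterpassword

Existing files have to be imported into the cluster database before switching handlers; the dedicated cluster setup chapters describe this procedure.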
eZDFS: This is the Distributed File System handler with a DB overlay. This handler is required for NFS-based architectures. It clusters by storing the clustered files mainly on NFS (the distributed file system), while the file metadata (size, mtime, expiry status) is maintained in a database table similar to the one used by the eZ DB file handler. NFS is used to read and write the reference copy of clustered files. Cache files are copied locally when used by a front-end, whereas images and binary files (when accessed directly via the browser) are streamed directly from NFS.
Note: The eZ DFS file handler is not available in eZ Publish 4.1. As far as eZ DFS is concerned, this documentation applies to eZ Publish 4.2 and above.
The cluster database will not handle high traffic on binary files well. For high-traffic sites it is recommended to put an HTTP cache such as Varnish or Squid in front of eZ Publish.
Currently, MySQL is the only supported database for this file handler.
The two most important aspects of the eZ DFS architecture are the cluster database and the NFS mount point. The first aspect implies that the database structure must be created manually. The definition of the cluster database table can be found in the eZ DFS MySQL driver class file, located relative to the root of your eZ Publish installation at:
kernel/private/classes/clusterfilehandlers/dfsbackends/mysql.php
Since eZ DFS is based on NFS, each eZ Publish installation sharing the same relational database must use the same cluster database, and each must have a local mount point to the same NFS export. The NFS export has to be available to, and writable by, the web server's user on each eZ Publish server. It is also recommended that every eZ Publish server is configured in exactly the same way; refer to your system and server manuals for how to set this up on your platform. Each eZ Publish installation must set the NFS mount point to the same location in a global override of its settings/file.ini configuration file, using the "MountPointPath=" setting in the "[eZDFSClusteringSettings]" configuration group. The NFS mount point is a local directory on each eZ Publish server that links to the network file system where the handler stores the files.
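As an illustration, a global override of settings/file.ini for eZ DFS could look roughly like the sketch below. The mount point path, backend class name and database credentials are placeholders, so verify the exact setting names against the file.ini reference for your eZ Publish version.

# settings/override/file.ini.append.php (sketch; values are placeholders, adjust for your setup)
[ClusteringSettings]
FileHandler=eZDFSFileHandler

[eZDFSClusteringSettings]
MountPointPath=/var/nfsmount
DBBackend=eZDFSFileHandlerMySQLBackend
DBHost=cluster-db-host
DBName=cluster
DBUser=clusteruser
DBPassword=clusterpassword

Every eZ Publish server sharing the same cluster database must use the same MountPointPath value, pointing at its local mount of the shared NFS export.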
It is important to know that the var directories should never be shared among instances, since they are synchronized automatically. This is valid for both eZ DB and eZ DFS, because it is the cluster handler that takes care of synchronizing data from and to the centralized repository.
For more information, see the chapter "Setting it up for an eZDFS file handler".