Ceph 0.94 has been released. The major updates in this version are as follows:
- RADOS performance: a range of improvements have been made in the OSD and client-side librados code that improve the throughput on flash backends and improve parallelism and scaling on fast machines (a short librados sketch follows this list).
- Simplified RGW deployment: the ceph-deploy tool now has a new ‘ceph-deploy rgw create HOST’ command that quickly deploys an instance of the S3/Swift gateway using the embedded Civetweb server. This is vastly simpler than the previous Apache-based deployment. There are a few rough edges (e.g., around SSL support), but we encourage users to try the new method.
- RGW object versioning: RGW now supports the S3 object versioning API, which preserves old versions of objects instead of overwriting them (see the versioning sketch after this list).
- RGW bucket sharding: RGW can now shard the bucket index for large buckets across multiple OSDs, improving performance for very large buckets.
- RBD object maps: RBD now has an object map function that tracks which parts of the image are allocated, improving performance for clones and for commands like export and delete (an image-creation sketch follows the list).
- RBD mandatory locking: RBD has a new mandatory locking framework (still disabled by default) that adds additional safeguards to prevent multiple clients from using the same image at the same time.
- RBD copy-on-read: RBD now supports copy-on-read for image clones, improving performance for some workloads.
- CephFS snapshot improvements: many bugs have been fixed with CephFS snapshots. Although they are still disabled by default, stability has improved significantly.
- CephFS recovery tools: we have built some journal recovery and diagnostic tools. Stability and performance of single-MDS systems is vastly improved in Giant, and more improvements have been made now in Hammer. Although we still recommend caution when storing important data in CephFS, we do encourage testing for non-critical workloads so that we can better gauge the feature, usability, performance, and stability gaps.
- CRUSH improvements: we have added a new straw2 bucket algorithm that reduces the amount of data migration required when changes are made to the cluster.
- RADOS cache tiering: a series of changes have been made in the cache tiering code that improve performance and reduce latency.
- Experimental RDMA support: there is now experimental support for RDMA via the Accelio (libxio) library.
- New administrator commands: the ‘ceph osd df’ command shows pertinent details on OSD disk utilizations. The ‘ceph pg ls …’ command makes it much simpler to query PG states while diagnosing cluster issues (a scripted query sketch follows below).
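To illustrate the client-side librados path mentioned in the RADOS performance item, here is a minimal round-trip sketch using the python-rados binding. The conffile path and the pool name ‘rbd’ are assumptions about a local test cluster, not part of the release.

    # Minimal librados round trip (sketch): write one object and read it back.
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed local conf
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')  # assumed existing pool
        try:
            ioctx.write_full('hello_object', b'hello hammer')
            print(ioctx.read('hello_object'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()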
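The S3 object versioning API can be exercised with any standard S3 client pointed at RGW. The sketch below uses boto3; the endpoint URL, credentials, and bucket name are placeholders, not values shipped with the release.

    # Enable versioning on an RGW bucket and show that overwrites keep old versions.
    import boto3

    s3 = boto3.client(
        's3',
        endpoint_url='http://rgw.example.com:7480',  # placeholder RGW endpoint
        aws_access_key_id='ACCESS_KEY',              # placeholder credentials
        aws_secret_access_key='SECRET_KEY',
    )

    s3.create_bucket(Bucket='demo-bucket')
    s3.put_bucket_versioning(
        Bucket='demo-bucket',
        VersioningConfiguration={'Status': 'Enabled'},
    )

    s3.put_object(Bucket='demo-bucket', Key='doc.txt', Body=b'v1')
    s3.put_object(Bucket='demo-bucket', Key='doc.txt', Body=b'v2')
    for v in s3.list_object_versions(Bucket='demo-bucket')['Versions']:
        print(v['Key'], v['VersionId'], v['IsLatest'])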
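The object map and exclusive/mandatory locking features are opt-in per image. The sketch below creates a new-format image with those features using the python rbd binding; the feature constants and the pool name are assumptions about the binding and the local cluster. Copy-on-read for clones is enabled separately on the client side (the rbd_clone_copy_on_read option), not via an image feature flag.

    # Create an RBD image with exclusive locking and an object map enabled (sketch).
    import rados
    import rbd

    features = (rbd.RBD_FEATURE_LAYERING |
                rbd.RBD_FEATURE_EXCLUSIVE_LOCK |  # prerequisite for the locking framework
                rbd.RBD_FEATURE_OBJECT_MAP)       # tracks which objects are allocated

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')  # assumed existing pool
        try:
            rbd.RBD().create(ioctx, 'demo-image', 10 * 1024 ** 3,
                             old_format=False, features=features)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()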
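The new administrator commands can also be driven from scripts through the monitor command interface of python-rados, asking for JSON output. The field names read from the report below ('nodes', 'name', 'utilization') are assumptions about the ‘osd df’ JSON layout.

    # Query 'osd df' programmatically via the monitor command interface (sketch).
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({'prefix': 'osd df', 'format': 'json'})
        ret, outbuf, errs = cluster.mon_command(cmd, b'')
        if ret == 0:
            for node in json.loads(outbuf)['nodes']:  # assumed report layout
                print(node['name'], node['utilization'])
        else:
            print('osd df failed:', errs)
    finally:
        cluster.shutdown()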
For more details, see the release page. This version is now available for download: https://github.com/ceph/ceph/archive/v0.94.zip
Ceph is a next-generation free-software distributed file system designed by Sage Weil (a co-founder of DreamHost) for his doctoral thesis at the University of California, Santa Cruz. Since graduating in 2007, Sage has worked on Ceph full time to make it suitable for production use. Ceph's main goal is to be a POSIX-based distributed file system with no single point of failure, in which data is fault-tolerant and seamlessly replicated. In March 2010, Linus Torvalds merged the Ceph client into kernel 2.6.34. An article on IBM developerWorks discusses Ceph's architecture, its fault-tolerance implementation, and its features for simplifying the management of massive amounts of data. Ceph documentation in Chinese: http://docs.openfans.org/ceph